sha (string, 40) | text (string, 1–13.4M) | id (string, 2–117) | tags (sequence, 1–7.91k) | created_at (string, 25) | metadata (string, 2–875k) | last_modified (string, 25) | arxiv (sequence, 0–25) | languages (sequence, 0–7.91k) | tags_str (string, 17–159k) | text_str (string, 1–447k) | text_lists (sequence, 0–352) | processed_texts (sequence, 1–353) | tokens_length (sequence, 1–353) | input_texts (sequence, 1–40)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
a2bc8d5de70f89d889c35302656743bd5a00d576 |
# Dataset Card for ZINC
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
- [PyGeometric](#pygeometric)
- [Dataset Structure](#dataset-structure)
- [Data Properties](#data-properties)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **[Homepage](https://zinc15.docking.org/)**
- **[Repository](https://www.dropbox.com/s/feo9qle74kg48gy/molecules.zip?dl=1)**
- **Paper:** ZINC 15 – Ligand Discovery for Everyone (see citation)
- **Leaderboard:** [Papers with code leaderboard](https://paperswithcode.com/sota/)
### Dataset Summary
The `ZINC` dataset is a "curated collection of commercially available chemical compounds prepared especially for virtual screening" (Wikipedia).
### Supported Tasks and Leaderboards
`ZINC` should be used for molecular property prediction (aiming to predict the constrained solubility of the molecules), a graph regression task. The evaluation metric is the mean absolute error (MAE).
The associated leaderboard is here: [Papers with code leaderboard](https://paperswithcode.com/sota/graph-regression-on-zinc).
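For reference, the MAE is just the mean absolute difference between predictions and targets; a minimal sketch on hypothetical values:
```python
import torch

preds = torch.tensor([0.12, -0.53, 1.40])    # hypothetical model outputs
targets = torch.tensor([0.10, -0.60, 1.25])  # hypothetical constrained-solubility targets
mae = (preds - targets).abs().mean()
print(mae.item())
```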
## External Use
### PyGeometric
To load in PyGeometric, do the following:
```python
from datasets import load_dataset
import torch
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader

dataset_hf = load_dataset("graphs-datasets/<mydataset>")
# For the train set (replace "train" by "validation" or "test" as needed)
# Each row is a dict of Python lists; convert the list fields to tensors so PyG can use them
dataset_pg_list = [
    Data(**{k: (v if isinstance(v, int) else torch.tensor(v)) for k, v in graph.items()})
    for graph in dataset_hf["train"]
]
dataset_pg = DataLoader(dataset_pg_list)
```
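As a quick sanity check on the resulting loader (a sketch; `batch_size` defaults to 1 unless set):
```python
loader = DataLoader(dataset_pg_list, batch_size=32, shuffle=True)
batch = next(iter(loader))
print(batch.num_graphs)  # number of graphs collated into this PyG Batch
```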
## Dataset Structure
### Data Properties
| property | value |
|---|---|
| scale | big |
| #graphs | 220011 |
| average #nodes | 23.15 |
| average #edges | 49.81 |
### Data Fields
Each row of a given file is a graph, with:
- `node_feat` (list: #nodes x #node-features): nodes
- `edge_index` (list: 2 x #edges): pairs of nodes constituting edges
- `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features
- `y` (list: 1 x #labels): the graph-level target(s) to predict (here a single value, the constrained solubility)
- `num_nodes` (int): number of nodes of the graph
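For example, the fields of the first training graph can be inspected directly (a sketch):
```python
from datasets import load_dataset

graph = load_dataset("graphs-datasets/ZINC", split="train")[0]
print(sorted(graph.keys()))  # ['edge_attr', 'edge_index', 'node_feat', 'num_nodes', 'y']
print(graph["num_nodes"], len(graph["edge_index"][0]))  # node count, edge count
```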
### Data Splits
This data comes from the PyGeometric version of the dataset, and follows the provided data splits.
The same splits can be retrieved with:
```python
from torch_geometric.datasets import ZINC

dataset = ZINC(root='', split='train')  # or split='val' / split='test'
```
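A sketch checking that the PyGeometric split sizes match this card's metadata (assuming the full, non-subset version of ZINC):
```python
from torch_geometric.datasets import ZINC

splits = {s: len(ZINC(root='', split=s)) for s in ('train', 'val', 'test')}
print(splits)  # expected: {'train': 220011, 'val': 24445, 'test': 5000}
```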
## Additional Information
### Licensing Information
The dataset has been released under an unknown license. Please open an issue if you know the license of this dataset.
### Citation Information
```bibtex
@article{doi:10.1021/acs.jcim.5b00559,
author = {Sterling, Teague and Irwin, John J.},
title = {ZINC 15 – Ligand Discovery for Everyone},
journal = {Journal of Chemical Information and Modeling},
volume = {55},
number = {11},
pages = {2324-2337},
year = {2015},
doi = {10.1021/acs.jcim.5b00559},
note ={PMID: 26479676},
URL = {
https://doi.org/10.1021/acs.jcim.5b00559
},
eprint = {
https://doi.org/10.1021/acs.jcim.5b00559
}
}
```
### Contributions
Thanks to [@clefourrier](https://github.com/clefourrier) for adding this dataset. | graphs-datasets/ZINC | [
"task_categories:graph-ml",
"license:unknown",
"region:us"
] | 2022-08-01T14:11:09+00:00 | {"license": "unknown", "task_categories": ["graph-ml"], "dataset_info": {"features": [{"name": "node_feat", "sequence": {"sequence": "int64"}}, {"name": "edge_index", "sequence": {"sequence": "int64"}}, {"name": "edge_attr", "sequence": {"sequence": "int64"}}, {"name": "y", "sequence": "float64"}, {"name": "num_nodes", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 376796456, "num_examples": 220011}, {"name": "test", "num_bytes": 8538528, "num_examples": 5000}, {"name": "validation", "num_bytes": 41819628, "num_examples": 24445}], "download_size": 20636253, "dataset_size": 427154612}} | 2023-02-07T16:37:32+00:00 | [] | [] | TAGS
#task_categories-graph-ml #license-unknown #region-us
| Dataset Card for ZINC
=====================
Table of Contents
-----------------
* Table of Contents
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
* External Use
+ PyGeometric
* Dataset Structure
+ Data Properties
+ Data Fields
+ Data Splits
* Additional Information
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Homepage
* Repository
* Paper: ZINC 15 – Ligand Discovery for Everyone (see citation)
* Leaderboard: Papers with code leaderboard
### Dataset Summary
The 'ZINC' dataset is a "curated collection of commercially available chemical compounds prepared especially for virtual screening" (Wikipedia).
### Supported Tasks and Leaderboards
'ZINC' should be used for molecular property prediction (aiming to predict the constrained solubility of the molecules), a graph regression task. The score used is the MAE.
The associated leaderboard is here: Papers with code leaderboard.
External Use
------------
### PyGeometric
To load in PyGeometric, do the following:
Dataset Structure
-----------------
### Data Properties
### Data Fields
Each row of a given file is a graph, with:
* 'node\_feat' (list: #nodes x #node-features): nodes
* 'edge\_index' (list: 2 x #edges): pairs of nodes constituting edges
* 'edge\_attr' (list: #edges x #edge-features): for the aforementioned edges, contains their features
* 'y' (list: 1 x #labels): contains the number of labels available to predict (here 1, equal to zero or one)
* 'num\_nodes' (int): number of nodes of the graph
### Data Splits
This data comes from the PyGeometric version of the dataset, and follows the provided data splits.
This information can be found back using
Additional Information
----------------------
### Licensing Information
The dataset has been released under unknown license. Please open an issue if you know what is the license of this dataset.
### Contributions
Thanks to @clefourrier for adding this dataset.
| [
"### Dataset Summary\n\n\nThe 'ZINC' dataset is a \"curated collection of commercially available chemical compounds prepared especially for virtual screening\" (Wikipedia).",
"### Supported Tasks and Leaderboards\n\n\n'ZINC' should be used for molecular property prediction (aiming to predict the constrained solubility of the molecules), a graph regression task. The score used is the MAE.\n\n\nThe associated leaderboard is here: Papers with code leaderboard.\n\n\nExternal Use\n------------",
"### PyGeometric\n\n\nTo load in PyGeometric, do the following:\n\n\nDataset Structure\n-----------------",
"### Data Properties",
"### Data Fields\n\n\nEach row of a given file is a graph, with:\n\n\n* 'node\\_feat' (list: #nodes x #node-features): nodes\n* 'edge\\_index' (list: 2 x #edges): pairs of nodes constituting edges\n* 'edge\\_attr' (list: #edges x #edge-features): for the aforementioned edges, contains their features\n* 'y' (list: 1 x #labels): contains the number of labels available to predict (here 1, equal to zero or one)\n* 'num\\_nodes' (int): number of nodes of the graph",
"### Data Splits\n\n\nThis data comes from the PyGeometric version of the dataset, and follows the provided data splits.\nThis information can be found back using\n\n\nAdditional Information\n----------------------",
"### Licensing Information\n\n\nThe dataset has been released under unknown license. Please open an issue if you know what is the license of this dataset.",
"### Contributions\n\n\nThanks to @clefourrier for adding this dataset."
] | [
"TAGS\n#task_categories-graph-ml #license-unknown #region-us \n",
"### Dataset Summary\n\n\nThe 'ZINC' dataset is a \"curated collection of commercially available chemical compounds prepared especially for virtual screening\" (Wikipedia).",
"### Supported Tasks and Leaderboards\n\n\n'ZINC' should be used for molecular property prediction (aiming to predict the constrained solubility of the molecules), a graph regression task. The score used is the MAE.\n\n\nThe associated leaderboard is here: Papers with code leaderboard.\n\n\nExternal Use\n------------",
"### PyGeometric\n\n\nTo load in PyGeometric, do the following:\n\n\nDataset Structure\n-----------------",
"### Data Properties",
"### Data Fields\n\n\nEach row of a given file is a graph, with:\n\n\n* 'node\\_feat' (list: #nodes x #node-features): nodes\n* 'edge\\_index' (list: 2 x #edges): pairs of nodes constituting edges\n* 'edge\\_attr' (list: #edges x #edge-features): for the aforementioned edges, contains their features\n* 'y' (list: 1 x #labels): contains the number of labels available to predict (here 1, equal to zero or one)\n* 'num\\_nodes' (int): number of nodes of the graph",
"### Data Splits\n\n\nThis data comes from the PyGeometric version of the dataset, and follows the provided data splits.\nThis information can be found back using\n\n\nAdditional Information\n----------------------",
"### Licensing Information\n\n\nThe dataset has been released under unknown license. Please open an issue if you know what is the license of this dataset.",
"### Contributions\n\n\nThanks to @clefourrier for adding this dataset."
] | [
23,
35,
72,
25,
4,
158,
43,
34,
17
] | [
"passage: TAGS\n#task_categories-graph-ml #license-unknown #region-us \n### Dataset Summary\n\n\nThe 'ZINC' dataset is a \"curated collection of commercially available chemical compounds prepared especially for virtual screening\" (Wikipedia).### Supported Tasks and Leaderboards\n\n\n'ZINC' should be used for molecular property prediction (aiming to predict the constrained solubility of the molecules), a graph regression task. The score used is the MAE.\n\n\nThe associated leaderboard is here: Papers with code leaderboard.\n\n\nExternal Use\n------------### PyGeometric\n\n\nTo load in PyGeometric, do the following:\n\n\nDataset Structure\n-----------------### Data Properties### Data Fields\n\n\nEach row of a given file is a graph, with:\n\n\n* 'node\\_feat' (list: #nodes x #node-features): nodes\n* 'edge\\_index' (list: 2 x #edges): pairs of nodes constituting edges\n* 'edge\\_attr' (list: #edges x #edge-features): for the aforementioned edges, contains their features\n* 'y' (list: 1 x #labels): contains the number of labels available to predict (here 1, equal to zero or one)\n* 'num\\_nodes' (int): number of nodes of the graph### Data Splits\n\n\nThis data comes from the PyGeometric version of the dataset, and follows the provided data splits.\nThis information can be found back using\n\n\nAdditional Information\n----------------------### Licensing Information\n\n\nThe dataset has been released under unknown license. Please open an issue if you know what is the license of this dataset.### Contributions\n\n\nThanks to @clefourrier for adding this dataset."
] |
af9c040afaaa5902987bfcb3d4256c09239ec8ed |
# Dataset Card for PROTEINS
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
- [PyGeometric](#pygeometric)
- [Dataset Structure](#dataset-structure)
- [Data Properties](#data-properties)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **[Homepage](https://academic.oup.com/bioinformatics/article/21/suppl_1/i47/202991)**
- **[Repository](https://www.chrsmrrs.com/graphkerneldatasets/PROTEINS.zip)**
- **Paper:** Protein function prediction via graph kernels (see citation)
- **Leaderboard:** [Papers with code leaderboard](https://paperswithcode.com/sota/graph-classification-on-proteins)
### Dataset Summary
The `PROTEINS` dataset is a medium molecular property prediction dataset.
### Supported Tasks and Leaderboards
`PROTEINS` should be used for molecular property prediction (aiming to predict whether molecules are enzymes or not), a binary classification task. The evaluation metric is accuracy, reported as the mean over a 10-fold cross-validation.
## External Use
### PyGeometric
To load in PyGeometric, do the following:
```python
from datasets import load_dataset
import torch
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader

dataset_hf = load_dataset("graphs-datasets/<mydataset>")
# Each row is a dict of Python lists; convert the list fields to tensors so PyG can use them
dataset_pg_list = [
    Data(**{k: (v if isinstance(v, int) else torch.tensor(v)) for k, v in graph.items()})
    for graph in dataset_hf["train"]
]
dataset_pg = DataLoader(dataset_pg_list)
```
## Dataset Structure
### Data Properties
| property | value |
|---|---|
| scale | medium |
| #graphs | 1113 |
| average #nodes | 39.06 |
| average #edges | 72.82 |
### Data Fields
Each row of a given file is a graph, with:
- `node_feat` (list: #nodes x #node-features): nodes
- `edge_index` (list: 2 x #edges): pairs of nodes constituting edges
- `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features
- `y` (list: 1 x #labels): the graph label(s) to predict (here a single binary label, 0 or 1)
- `num_nodes` (int): number of nodes of the graph
### Data Splits
This data comes from the PyGeometric version of the dataset provided by TUDataset.
The dataset can be retrieved with:
```python
from torch_geometric.datasets import TUDataset
dataset = TUDataset(root='', name='PROTEINS')
```
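Since results on `PROTEINS` are reported over the 10-fold protocol mentioned above, a cross-validation skeleton could look as follows (an illustration using the `dataset` object from the previous snippet, not a fixed benchmark script):
```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

labels = np.array([int(data.y) for data in dataset])  # binary enzyme labels
skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
fold_accuracies = []
for train_idx, test_idx in skf.split(np.zeros(len(labels)), labels):
    train_set = dataset[train_idx.tolist()]  # train a model on this fold
    test_set = dataset[test_idx.tolist()]    # evaluate accuracy on this fold
# the reported score is the mean of fold_accuracies over the 10 folds
```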
## Additional Information
### Licensing Information
The dataset has been released under an unknown license; please open an issue if you have information about it.
### Citation Information
```
@article{10.1093/bioinformatics/bti1007,
author = {Borgwardt, Karsten M. and Ong, Cheng Soon and Schönauer, Stefan and Vishwanathan, S. V. N. and Smola, Alex J. and Kriegel, Hans-Peter},
title = "{Protein function prediction via graph kernels}",
journal = {Bioinformatics},
volume = {21},
number = {suppl_1},
pages = {i47-i56},
year = {2005},
month = {06},
abstract = "{Motivation: Computational approaches to protein function prediction infer protein function by finding proteins with similar sequence, structure, surface clefts, chemical properties, amino acid motifs, interaction partners or phylogenetic profiles. We present a new approach that combines sequential, structural and chemical information into one graph model of proteins. We predict functional class membership of enzymes and non-enzymes using graph kernels and support vector machine classification on these protein graphs.Results: Our graph model, derivable from protein sequence and structure only, is competitive with vector models that require additional protein information, such as the size of surface pockets. If we include this extra information into our graph model, our classifier yields significantly higher accuracy levels than the vector models. Hyperkernels allow us to select and to optimally combine the most relevant node attributes in our protein graphs. We have laid the foundation for a protein function prediction system that integrates protein information from various sources efficiently and effectively.Availability: More information available via www.dbs.ifi.lmu.de/Mitarbeiter/borgwardt.html.Contact:[email protected]}",
issn = {1367-4803},
doi = {10.1093/bioinformatics/bti1007},
url = {https://doi.org/10.1093/bioinformatics/bti1007},
eprint = {https://academic.oup.com/bioinformatics/article-pdf/21/suppl\_1/i47/524364/bti1007.pdf},
}
```
### Contributions
Thanks to [@clefourrier](https://github.com/clefourrier) for adding this dataset. | graphs-datasets/PROTEINS | [
"task_categories:graph-ml",
"license:unknown",
"region:us"
] | 2022-08-01T14:50:33+00:00 | {"license": "unknown", "task_categories": ["graph-ml"]} | 2023-02-07T16:39:11+00:00 | [] | [] | TAGS
#task_categories-graph-ml #license-unknown #region-us
| Dataset Card for PROTEINS
=========================
Table of Contents
-----------------
* Table of Contents
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
* External Use
+ PyGeometric
* Dataset Structure
+ Data Properties
+ Data Fields
+ Data Splits
* Additional Information
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Homepage
* Repository
* Paper: Protein function prediction via graph kernels (see citation)
* Leaderboard: Papers with code leaderboard
### Dataset Summary
The 'PROTEINS' dataset is a medium molecular property prediction dataset.
### Supported Tasks and Leaderboards
'PROTEINS' should be used for molecular property prediction (aiming to predict whether molecules are enzymes or not), a binary classification task. The score used is accuracy, using a 10-fold cross-validation.
External Use
------------
### PyGeometric
To load in PyGeometric, do the following:
Dataset Structure
-----------------
### Data Properties
### Data Fields
Each row of a given file is a graph, with:
* 'node\_feat' (list: #nodes x #node-features): nodes
* 'edge\_index' (list: 2 x #edges): pairs of nodes constituting edges
* 'edge\_attr' (list: #edges x #edge-features): for the aforementioned edges, contains their features
* 'y' (list: 1 x #labels): contains the number of labels available to predict (here 1, equal to zero or one)
* 'num\_nodes' (int): number of nodes of the graph
### Data Splits
This data comes from the PyGeometric version of the dataset provided by TUDataset.
This information can be found back using
Additional Information
----------------------
### Licensing Information
The dataset has been released under unknown license, please open an issue if you have info about it.
### Contributions
Thanks to @clefourrier for adding this dataset.
| [
"### Dataset Summary\n\n\nThe 'PROTEINS' dataset is a medium molecular property prediction dataset.",
"### Supported Tasks and Leaderboards\n\n\n'PROTEINS' should be used for molecular property prediction (aiming to predict whether molecules are enzymes or not), a binary classification task. The score used is accuracy, using a 10-fold cross-validation.\n\n\nExternal Use\n------------",
"### PyGeometric\n\n\nTo load in PyGeometric, do the following:\n\n\nDataset Structure\n-----------------",
"### Data Properties",
"### Data Fields\n\n\nEach row of a given file is a graph, with:\n\n\n* 'node\\_feat' (list: #nodes x #node-features): nodes\n* 'edge\\_index' (list: 2 x #edges): pairs of nodes constituting edges\n* 'edge\\_attr' (list: #edges x #edge-features): for the aforementioned edges, contains their features\n* 'y' (list: 1 x #labels): contains the number of labels available to predict (here 1, equal to zero or one)\n* 'num\\_nodes' (int): number of nodes of the graph",
"### Data Splits\n\n\nThis data comes from the PyGeometric version of the dataset provided by TUDataset.\nThis information can be found back using\n\n\nAdditional Information\n----------------------",
"### Licensing Information\n\n\nThe dataset has been released under unknown license, please open an issue if you have info about it.",
"### Contributions\n\n\nThanks to @clefourrier for adding this dataset."
] | [
"TAGS\n#task_categories-graph-ml #license-unknown #region-us \n",
"### Dataset Summary\n\n\nThe 'PROTEINS' dataset is a medium molecular property prediction dataset.",
"### Supported Tasks and Leaderboards\n\n\n'PROTEINS' should be used for molecular property prediction (aiming to predict whether molecules are enzymes or not), a binary classification task. The score used is accuracy, using a 10-fold cross-validation.\n\n\nExternal Use\n------------",
"### PyGeometric\n\n\nTo load in PyGeometric, do the following:\n\n\nDataset Structure\n-----------------",
"### Data Properties",
"### Data Fields\n\n\nEach row of a given file is a graph, with:\n\n\n* 'node\\_feat' (list: #nodes x #node-features): nodes\n* 'edge\\_index' (list: 2 x #edges): pairs of nodes constituting edges\n* 'edge\\_attr' (list: #edges x #edge-features): for the aforementioned edges, contains their features\n* 'y' (list: 1 x #labels): contains the number of labels available to predict (here 1, equal to zero or one)\n* 'num\\_nodes' (int): number of nodes of the graph",
"### Data Splits\n\n\nThis data comes from the PyGeometric version of the dataset provided by TUDataset.\nThis information can be found back using\n\n\nAdditional Information\n----------------------",
"### Licensing Information\n\n\nThe dataset has been released under unknown license, please open an issue if you have info about it.",
"### Contributions\n\n\nThanks to @clefourrier for adding this dataset."
] | [
23,
24,
67,
25,
4,
158,
39,
29,
17
] | [
"passage: TAGS\n#task_categories-graph-ml #license-unknown #region-us \n### Dataset Summary\n\n\nThe 'PROTEINS' dataset is a medium molecular property prediction dataset.### Supported Tasks and Leaderboards\n\n\n'PROTEINS' should be used for molecular property prediction (aiming to predict whether molecules are enzymes or not), a binary classification task. The score used is accuracy, using a 10-fold cross-validation.\n\n\nExternal Use\n------------### PyGeometric\n\n\nTo load in PyGeometric, do the following:\n\n\nDataset Structure\n-----------------### Data Properties### Data Fields\n\n\nEach row of a given file is a graph, with:\n\n\n* 'node\\_feat' (list: #nodes x #node-features): nodes\n* 'edge\\_index' (list: 2 x #edges): pairs of nodes constituting edges\n* 'edge\\_attr' (list: #edges x #edge-features): for the aforementioned edges, contains their features\n* 'y' (list: 1 x #labels): contains the number of labels available to predict (here 1, equal to zero or one)\n* 'num\\_nodes' (int): number of nodes of the graph### Data Splits\n\n\nThis data comes from the PyGeometric version of the dataset provided by TUDataset.\nThis information can be found back using\n\n\nAdditional Information\n----------------------### Licensing Information\n\n\nThe dataset has been released under unknown license, please open an issue if you have info about it.### Contributions\n\n\nThanks to @clefourrier for adding this dataset."
] |
d0d278691a40f1d671294d5f3690a18acf6e0270 |
# Dataset Card for MUTAG
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
- [PyGeometric](#pygeometric)
- [Dataset Structure](#dataset-structure)
- [Data Properties](#data-properties)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **[Homepage](https://pubs.acs.org/doi/abs/10.1021/jm00106a046)**
- **[Repository](https://www.chrsmrrs.com/graphkerneldatasets/MUTAG.zip)**
- **Paper:** Structure-activity relationship of mutagenic aromatic and heteroaromatic nitro compounds. Correlation with molecular orbital energies and hydrophobicity (see citation)
- **Leaderboard:** [Papers with code leaderboard](https://paperswithcode.com/sota/graph-classification-on-mutag)
### Dataset Summary
The `MUTAG` dataset is 'a collection of nitroaromatic compounds and the goal is to predict their mutagenicity on Salmonella typhimurium'.
### Supported Tasks and Leaderboards
`MUTAG` should be used for molecular property prediction (aiming to predict whether molecules have a mutagenic effect on a given bacterium or not), a binary classification task. The evaluation metric is accuracy, reported as the mean over a 10-fold cross-validation.
## External Use
### PyGeometric
To load in PyGeometric, do the following:
```python
from datasets import load_dataset
import torch
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader

dataset_hf = load_dataset("graphs-datasets/<mydataset>")
# For the train set (replace "train" by the validation or test split as needed)
# Each row is a dict of Python lists; convert the list fields to tensors so PyG can use them
dataset_pg_list = [
    Data(**{k: (v if isinstance(v, int) else torch.tensor(v)) for k, v in graph.items()})
    for graph in dataset_hf["train"]
]
dataset_pg = DataLoader(dataset_pg_list)
```
## Dataset Structure
### Data Properties
| property | value |
|---|---|
| scale | small |
| #graphs | 187 |
| average #nodes | 18.03 |
| average #edges | 39.80 |
### Data Fields
Each row of a given file is a graph, with:
- `node_feat` (list: #nodes x #node-features): nodes
- `edge_index` (list: 2 x #edges): pairs of nodes constituting edges
- `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features
- `y` (list: 1 x #labels): the graph label(s) to predict (here a single binary label, 0 or 1)
- `num_nodes` (int): number of nodes of the graph
### Data Splits
This data comes from the PyGeometric version of the dataset provided by OGB, and follows the provided data splits.
The dataset can be retrieved with:
```python
from torch_geometric.datasets import TUDataset
cur_dataset = TUDataset(root="../dataset/loaded/",
name="MUTAG")
```
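A quick class-balance check on the loaded graphs (a sketch using the `cur_dataset` object above):
```python
import torch

ys = torch.tensor([int(data.y) for data in cur_dataset])
print(len(ys), ys.float().mean().item())  # graph count and fraction of positive labels
```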
## Additional Information
### Licensing Information
The dataset has been released under an unknown license; please open an issue if you have information.
### Citation Information
```
@article{doi:10.1021/jm00106a046,
author = {Debnath, Asim Kumar and Lopez de Compadre, Rosa L. and Debnath, Gargi and Shusterman, Alan J. and Hansch, Corwin},
title = {Structure-activity relationship of mutagenic aromatic and heteroaromatic nitro compounds. Correlation with molecular orbital energies and hydrophobicity},
journal = {Journal of Medicinal Chemistry},
volume = {34},
number = {2},
pages = {786-797},
year = {1991},
doi = {10.1021/jm00106a046},
URL = {
https://doi.org/10.1021/jm00106a046
},
eprint = {
https://doi.org/10.1021/jm00106a046
}
}
```
### Contributions
Thanks to [@clefourrier](https://github.com/clefourrier) for adding this dataset. | graphs-datasets/MUTAG | [
"task_categories:graph-ml",
"license:unknown",
"region:us"
] | 2022-08-01T14:58:02+00:00 | {"license": "unknown", "task_categories": ["graph-ml"]} | 2023-02-07T16:39:19+00:00 | [] | [] | TAGS
#task_categories-graph-ml #license-unknown #region-us
| Dataset Card for MUTAG
======================
Table of Contents
-----------------
* Table of Contents
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
* External Use
+ PyGeometric
* Dataset Structure
+ Data Properties
+ Data Fields
+ Data Splits
* Additional Information
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Homepage
* Repository
* Paper: Structure-activity relationship of mutagenic aromatic and heteroaromatic nitro compounds. Correlation with molecular orbital energies and hydrophobicity (see citation)
* Leaderboard: Papers with code leaderboard
### Dataset Summary
The 'MUTAG' dataset is 'a collection of nitroaromatic compounds and the goal is to predict their mutagenicity on Salmonella typhimurium'.
### Supported Tasks and Leaderboards
'MUTAG' should be used for molecular property prediction (aiming to predict whether molecules have a mutagenic effect on a given bacterium or not), a binary classification task. The score used is accuracy, using a 10-fold cross-validation.
External Use
------------
### PyGeometric
To load in PyGeometric, do the following:
Dataset Structure
-----------------
### Data Properties
### Data Fields
Each row of a given file is a graph, with:
* 'node\_feat' (list: #nodes x #node-features): nodes
* 'edge\_index' (list: 2 x #edges): pairs of nodes constituting edges
* 'edge\_attr' (list: #edges x #edge-features): for the aforementioned edges, contains their features
* 'y' (list: 1 x #labels): contains the number of labels available to predict (here 1, equal to zero or one)
* 'num\_nodes' (int): number of nodes of the graph
### Data Splits
This data comes from the PyGeometric version of the dataset provided by OGB, and follows the provided data splits.
This information can be found back using
Additional Information
----------------------
### Licensing Information
The dataset has been released under unknown license, please open an issue if you have information.
### Contributions
Thanks to @clefourrier for adding this dataset.
| [
"### Dataset Summary\n\n\nThe 'MUTAG' dataset is 'a collection of nitroaromatic compounds and the goal is to predict their mutagenicity on Salmonella typhimurium'.",
"### Supported Tasks and Leaderboards\n\n\n'MUTAG' should be used for molecular property prediction (aiming to predict whether molecules have a mutagenic effect on a given bacterium or not), a binary classification task. The score used is accuracy, using a 10-fold cross-validation.\n\n\nExternal Use\n------------",
"### PyGeometric\n\n\nTo load in PyGeometric, do the following:\n\n\nDataset Structure\n-----------------",
"### Data Properties",
"### Data Fields\n\n\nEach row of a given file is a graph, with:\n\n\n* 'node\\_feat' (list: #nodes x #node-features): nodes\n* 'edge\\_index' (list: 2 x #edges): pairs of nodes constituting edges\n* 'edge\\_attr' (list: #edges x #edge-features): for the aforementioned edges, contains their features\n* 'y' (list: 1 x #labels): contains the number of labels available to predict (here 1, equal to zero or one)\n* 'num\\_nodes' (int): number of nodes of the graph",
"### Data Splits\n\n\nThis data comes from the PyGeometric version of the dataset provided by OGB, and follows the provided data splits.\nThis information can be found back using\n\n\nAdditional Information\n----------------------",
"### Licensing Information\n\n\nThe dataset has been released under unknown license, please open an issue if you have information.",
"### Contributions\n\n\nThanks to @clefourrier for adding this dataset."
] | [
"TAGS\n#task_categories-graph-ml #license-unknown #region-us \n",
"### Dataset Summary\n\n\nThe 'MUTAG' dataset is 'a collection of nitroaromatic compounds and the goal is to predict their mutagenicity on Salmonella typhimurium'.",
"### Supported Tasks and Leaderboards\n\n\n'MUTAG' should be used for molecular property prediction (aiming to predict whether molecules have a mutagenic effect on a given bacterium or not), a binary classification task. The score used is accuracy, using a 10-fold cross-validation.\n\n\nExternal Use\n------------",
"### PyGeometric\n\n\nTo load in PyGeometric, do the following:\n\n\nDataset Structure\n-----------------",
"### Data Properties",
"### Data Fields\n\n\nEach row of a given file is a graph, with:\n\n\n* 'node\\_feat' (list: #nodes x #node-features): nodes\n* 'edge\\_index' (list: 2 x #edges): pairs of nodes constituting edges\n* 'edge\\_attr' (list: #edges x #edge-features): for the aforementioned edges, contains their features\n* 'y' (list: 1 x #labels): contains the number of labels available to predict (here 1, equal to zero or one)\n* 'num\\_nodes' (int): number of nodes of the graph",
"### Data Splits\n\n\nThis data comes from the PyGeometric version of the dataset provided by OGB, and follows the provided data splits.\nThis information can be found back using\n\n\nAdditional Information\n----------------------",
"### Licensing Information\n\n\nThe dataset has been released under unknown license, please open an issue if you have information.",
"### Contributions\n\n\nThanks to @clefourrier for adding this dataset."
] | [
23,
44,
74,
25,
4,
158,
47,
27,
17
] | [
"passage: TAGS\n#task_categories-graph-ml #license-unknown #region-us \n### Dataset Summary\n\n\nThe 'MUTAG' dataset is 'a collection of nitroaromatic compounds and the goal is to predict their mutagenicity on Salmonella typhimurium'.### Supported Tasks and Leaderboards\n\n\n'MUTAG' should be used for molecular property prediction (aiming to predict whether molecules have a mutagenic effect on a given bacterium or not), a binary classification task. The score used is accuracy, using a 10-fold cross-validation.\n\n\nExternal Use\n------------### PyGeometric\n\n\nTo load in PyGeometric, do the following:\n\n\nDataset Structure\n-----------------### Data Properties### Data Fields\n\n\nEach row of a given file is a graph, with:\n\n\n* 'node\\_feat' (list: #nodes x #node-features): nodes\n* 'edge\\_index' (list: 2 x #edges): pairs of nodes constituting edges\n* 'edge\\_attr' (list: #edges x #edge-features): for the aforementioned edges, contains their features\n* 'y' (list: 1 x #labels): contains the number of labels available to predict (here 1, equal to zero or one)\n* 'num\\_nodes' (int): number of nodes of the graph### Data Splits\n\n\nThis data comes from the PyGeometric version of the dataset provided by OGB, and follows the provided data splits.\nThis information can be found back using\n\n\nAdditional Information\n----------------------### Licensing Information\n\n\nThe dataset has been released under unknown license, please open an issue if you have information.### Contributions\n\n\nThanks to @clefourrier for adding this dataset."
] |
412288d7d6a1e6afc381bd89223e0a17c35b4875 |
# Dataset Card for IMDB-BINARY (IMDb-B)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
- [PyGeometric](#pygeometric)
- [Dataset Structure](#dataset-structure)
- [Data Properties](#data-properties)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **[Homepage](https://dl.acm.org/doi/10.1145/2783258.2783417)**
- **[Repository](https://www.chrsmrrs.com/graphkerneldatasets/IMDB-BINARY.zip)**
- **Paper:** Deep Graph Kernels (see citation)
- **Leaderboard:** [Papers with code leaderboard](https://paperswithcode.com/sota/graph-classification-on-imdb-b)
### Dataset Summary
The `IMDb-B` dataset is "a movie collaboration dataset that consists of the ego-networks of 1,000 actors/actresses who played roles in movies in IMDB. In each graph, nodes represent actors/actress, and there is an edge between them if they appear in the same movie. These graphs are derived from the Action and Romance genres".
### Supported Tasks and Leaderboards
`IMDb-B` should be used for graph classification (aiming to predict whether a movie graph is an action or romance movie), a binary classification task. The evaluation metric is accuracy, reported as the mean over a 10-fold cross-validation.
## External Use
### PyGeometric
To load in PyGeometric, do the following:
```python
from datasets import load_dataset
import torch
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader

dataset_hf = load_dataset("graphs-datasets/<mydataset>")
# For the train set (replace "train" by the validation or test split as needed)
# Each row is a dict of Python lists; convert the list fields to tensors so PyG can use them
dataset_pg_list = [
    Data(**{k: (v if isinstance(v, int) else torch.tensor(v)) for k, v in graph.items()})
    for graph in dataset_hf["train"]
]
dataset_pg = DataLoader(dataset_pg_list)
```
## Dataset Structure
### Data Properties
| property | value |
|---|---|
| scale | medium |
| #graphs | 1000 |
| average #nodes | 19.79 |
| average #edges | 193.25 |
### Data Fields
Each row of a given file is a graph, with:
- `edge_index` (list: 2 x #edges): pairs of nodes constituting edges
- `y` (list: 1 x #labels): the graph label(s) to predict (here a single binary label, 0 or 1)
- `num_nodes` (int): number of nodes of the graph
### Data Splits
This data comes from the PyGeometric version of the dataset.
The dataset can be retrieved with:
```python
from torch_geometric.datasets import TUDataset
cur_dataset = TUDataset(root="../dataset/loaded/",
name="IMDB-BINARY")
```
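Note that `IMDb-B` graphs carry no node features (the data fields above list only `edge_index`, `y` and `num_nodes`); a common workaround, shown here as an illustration rather than as part of this card's protocol, is to attach one-hot node degrees:
```python
from torch_geometric.transforms import OneHotDegree
from torch_geometric.utils import degree

# Compute the maximum node degree over the dataset, then use it as the one-hot size
max_deg = int(max(degree(data.edge_index[0], data.num_nodes).max() for data in cur_dataset))
cur_dataset.transform = OneHotDegree(max_degree=max_deg)
```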
## Additional Information
### Licensing Information
The dataset has been released under an unknown license; please open an issue if you have this information.
### Citation Information
```
@inproceedings{10.1145/2783258.2783417,
author = {Yanardag, Pinar and Vishwanathan, S.V.N.},
title = {Deep Graph Kernels},
year = {2015},
isbn = {9781450336642},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/2783258.2783417},
doi = {10.1145/2783258.2783417},
abstract = {In this paper, we present Deep Graph Kernels, a unified framework to learn latent representations of sub-structures for graphs, inspired by latest advancements in language modeling and deep learning. Our framework leverages the dependency information between sub-structures by learning their latent representations. We demonstrate instances of our framework on three popular graph kernels, namely Graphlet kernels, Weisfeiler-Lehman subtree kernels, and Shortest-Path graph kernels. Our experiments on several benchmark datasets show that Deep Graph Kernels achieve significant improvements in classification accuracy over state-of-the-art graph kernels.},
booktitle = {Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining},
pages = {1365–1374},
numpages = {10},
keywords = {collaboration networks, bioinformatics, r-convolution kernels, graph kernels, structured data, deep learning, social networks, string kernels},
location = {Sydney, NSW, Australia},
series = {KDD '15}
}
```
### Contributions
Thanks to [@clefourrier](https://github.com/clefourrier) for adding this dataset. | graphs-datasets/IMDB-BINARY | [
"task_categories:graph-ml",
"license:unknown",
"region:us"
] | 2022-08-01T15:17:25+00:00 | {"license": "unknown", "task_categories": ["graph-ml"]} | 2023-02-07T16:39:00+00:00 | [] | [] | TAGS
#task_categories-graph-ml #license-unknown #region-us
| Dataset Card for IMDB-BINARY (IMDb-B)
=====================================
Table of Contents
-----------------
* Table of Contents
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
* External Use
+ PyGeometric
* Dataset Structure
+ Data Properties
+ Data Fields
+ Data Splits
* Additional Information
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Homepage
* Repository
* Paper: Deep Graph Kernels (see citation)
* Leaderboard: Papers with code leaderboard
### Dataset Summary
The 'IMDb-B' dataset is "a movie collaboration dataset that consists of the ego-networks of 1,000 actors/actresses who played roles in movies in IMDB. In each graph, nodes represent actors/actress, and there is an edge between them if they appear in the same movie. These graphs are derived from the Action and Romance genres".
### Supported Tasks and Leaderboards
'IMDb-B' should be used for graph classification (aiming to predict whether a movie graph is an action or romance movie), a binary classification task. The score used is accuracy, using a 10-fold cross-validation.
External Use
------------
### PyGeometric
To load in PyGeometric, do the following:
Dataset Structure
-----------------
### Data Properties
### Data Fields
Each row of a given file is a graph, with:
* 'edge\_index' (list: 2 x #edges): pairs of nodes constituting edges
* 'y' (list: 1 x #labels): contains the number of labels available to predict (here 1, equal to zero or one)
* 'num\_nodes' (int): number of nodes of the graph
### Data Splits
This data comes from the PyGeometric version of the dataset.
This information can be found back using
Additional Information
----------------------
### Licensing Information
The dataset has been released under unknown license, please open an issue if you have this information.
### Contributions
Thanks to @clefourrier for adding this dataset.
| [
"### Dataset Summary\n\n\nThe 'IMDb-B' dataset is \"a movie collaboration dataset that consists of the ego-networks of 1,000 actors/actresses who played roles in movies in IMDB. In each graph, nodes represent actors/actress, and there is an edge between them if they appear in the same movie. These graphs are derived from the Action and Romance genres\".",
"### Supported Tasks and Leaderboards\n\n\n'IMDb-B' should be used for graph classification (aiming to predict whether a movie graph is an action or romance movie), a binary classification task. The score used is accuracy, using a 10-fold cross-validation.\n\n\nExternal Use\n------------",
"### PyGeometric\n\n\nTo load in PyGeometric, do the following:\n\n\nDataset Structure\n-----------------",
"### Data Properties",
"### Data Fields\n\n\nEach row of a given file is a graph, with:\n\n\n* 'edge\\_index' (list: 2 x #edges): pairs of nodes constituting edges\n* 'y' (list: 1 x #labels): contains the number of labels available to predict (here 1, equal to zero or one)\n* 'num\\_nodes' (int): number of nodes of the graph",
"### Data Splits\n\n\nThis data comes from the PyGeometric version of the dataset.\nThis information can be found back using\n\n\nAdditional Information\n----------------------",
"### Licensing Information\n\n\nThe dataset has been released under unknown license, please open an issue if you have this information.",
"### Contributions\n\n\nThanks to @clefourrier for adding this dataset."
] | [
"TAGS\n#task_categories-graph-ml #license-unknown #region-us \n",
"### Dataset Summary\n\n\nThe 'IMDb-B' dataset is \"a movie collaboration dataset that consists of the ego-networks of 1,000 actors/actresses who played roles in movies in IMDB. In each graph, nodes represent actors/actress, and there is an edge between them if they appear in the same movie. These graphs are derived from the Action and Romance genres\".",
"### Supported Tasks and Leaderboards\n\n\n'IMDb-B' should be used for graph classification (aiming to predict whether a movie graph is an action or romance movie), a binary classification task. The score used is accuracy, using a 10-fold cross-validation.\n\n\nExternal Use\n------------",
"### PyGeometric\n\n\nTo load in PyGeometric, do the following:\n\n\nDataset Structure\n-----------------",
"### Data Properties",
"### Data Fields\n\n\nEach row of a given file is a graph, with:\n\n\n* 'edge\\_index' (list: 2 x #edges): pairs of nodes constituting edges\n* 'y' (list: 1 x #labels): contains the number of labels available to predict (here 1, equal to zero or one)\n* 'num\\_nodes' (int): number of nodes of the graph",
"### Data Splits\n\n\nThis data comes from the PyGeometric version of the dataset.\nThis information can be found back using\n\n\nAdditional Information\n----------------------",
"### Licensing Information\n\n\nThe dataset has been released under unknown license, please open an issue if you have this information.",
"### Contributions\n\n\nThanks to @clefourrier for adding this dataset."
] | [
23,
96,
72,
25,
4,
97,
34,
28,
17
] | [
"passage: TAGS\n#task_categories-graph-ml #license-unknown #region-us \n### Dataset Summary\n\n\nThe 'IMDb-B' dataset is \"a movie collaboration dataset that consists of the ego-networks of 1,000 actors/actresses who played roles in movies in IMDB. In each graph, nodes represent actors/actress, and there is an edge between them if they appear in the same movie. These graphs are derived from the Action and Romance genres\".### Supported Tasks and Leaderboards\n\n\n'IMDb-B' should be used for graph classification (aiming to predict whether a movie graph is an action or romance movie), a binary classification task. The score used is accuracy, using a 10-fold cross-validation.\n\n\nExternal Use\n------------### PyGeometric\n\n\nTo load in PyGeometric, do the following:\n\n\nDataset Structure\n-----------------### Data Properties### Data Fields\n\n\nEach row of a given file is a graph, with:\n\n\n* 'edge\\_index' (list: 2 x #edges): pairs of nodes constituting edges\n* 'y' (list: 1 x #labels): contains the number of labels available to predict (here 1, equal to zero or one)\n* 'num\\_nodes' (int): number of nodes of the graph### Data Splits\n\n\nThis data comes from the PyGeometric version of the dataset.\nThis information can be found back using\n\n\nAdditional Information\n----------------------### Licensing Information\n\n\nThe dataset has been released under unknown license, please open an issue if you have this information.### Contributions\n\n\nThanks to @clefourrier for adding this dataset."
] |
9e59fee55eef474310846d06a0fab238602a32d8 |
# BigScience BLOOM Evaluation Results
This repository contains evaluation results & original predictions of BLOOM & friends.
## Usage
You can load numeric results via:
```python
from datasets import load_dataset
ds = load_dataset("bigscience/evaluation-results", "bloom")
```
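Only the `bloom` configuration is shown above; to see which other configurations exist, they can be listed programmatically (a sketch):
```python
from datasets import get_dataset_config_names

print(get_dataset_config_names("bigscience/evaluation-results"))
```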
If it takes too long, it may be faster to clone the repository and load the data from disk:
```python
!git clone https://huggingface.co/datasets/bigscience/evaluation-results
ds = load_dataset("evaluation-results", "bloom")
```
For the example generations (`.jsonl` files), you need to browse the repository manually.
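Alternatively, the generation files can be enumerated programmatically (a sketch using `huggingface_hub`):
```python
from huggingface_hub import list_repo_files

files = list_repo_files("bigscience/evaluation-results", repo_type="dataset")
print([f for f in files if f.endswith(".jsonl")][:5])
```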
## Structure
For the `bigsciencelmevalharness`, `lmevalharness` & `codeeval` evaluation frameworks, the structure is:
`model_name > evaluation_framework > checkpoint_type > dataset_name > data`
## Evaluation Procedure
- `bigsciencelmevalharness` files were created using the below:
- https://github.com/bigscience-workshop/Megatron-DeepSpeed/pull/291
- https://github.com/bigscience-workshop/lm-evaluation-harness
- `lmevalharness` files were created using the below:
- https://github.com/bigscience-workshop/Megatron-DeepSpeed
- https://github.com/EleutherAI/lm-evaluation-harness
- `codeeval` files were created using the HumanEval code dataset with the below:
- https://github.com/loubnabnl/bloom-code-evaluation
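For reference, the HumanEval dataset used for `codeeval` is itself available on the Hub; a loading sketch (assuming the canonical `openai_humaneval` repo id):
```python
from datasets import load_dataset

humaneval = load_dataset("openai_humaneval", split="test")
print(humaneval[0]["task_id"], humaneval[0]["prompt"][:60])
```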
| bigscience/evaluation-results | [
"task_categories:other",
"size_categories:100M<n<1B",
"region:us"
] | 2022-08-01T17:35:58+00:00 | {"size_categories": ["100M<n<1B"], "task_categories": ["other"], "pretty_name": "evaluation-results"} | 2023-05-27T23:13:53+00:00 | [] | [] | TAGS
#task_categories-other #size_categories-100M<n<1B #region-us
|
# BigScience BLOOM Evaluation Results
This repository contains evaluation results & original predictions of BLOOM & friends.
## Usage
You can load numeric results via:
If it takes too long, it may be faster to clone the repository and load the data from disk:
For example generations (.jsonl files), you need to manually browse the repository.
## Structure
For 'bigsciencelmevalharness', 'lmevalharness' & 'codeeval' evaluation_frameworks the structure is:
'model_name > evaluation_framework > checkpoint_type > dataset_name > data'
## Evaluation Procedure
- 'bigsciencelmevalharness' files were created using the below:
- URL
- URL
- 'lmevalharness' files were created using the below:
- URL
- URL
- 'codeeval' files were created using the HumanEval code dataset with the below:
- URL
| [
"# BigScience BLOOM Evaluation Results\n\n\nThis repository contains evaluation results & original predictions of BLOOM & friends.",
"## Usage\n\nYou can load numeric results via:\n\n\nIf it takes too long, it may be faster to clone the repository and load the data from disk:\n\n\nFor example generations (.jsonl files), you need to manually browse the repository.",
"## Structure\n\nFor 'bigsciencelmevalharness', 'lmevalharness' & 'codeeval' evaluation_frameworks the structure is:\n'model_name > evaluation_framework > checkpoint_type > dataset_name > data'",
"## Evaluation Procedure\n\n- 'bigsciencelmevalharness' files were created using the below:\n - URL\n - URL\n- 'lmevalharness' files were created using the below:\n - URL\n - URL\n- 'codeeval' files were created using the HumanEval code dataset with the below:\n - URL"
] | [
"TAGS\n#task_categories-other #size_categories-100M<n<1B #region-us \n",
"# BigScience BLOOM Evaluation Results\n\n\nThis repository contains evaluation results & original predictions of BLOOM & friends.",
"## Usage\n\nYou can load numeric results via:\n\n\nIf it takes too long, it may be faster to clone the repository and load the data from disk:\n\n\nFor example generations (.jsonl files), you need to manually browse the repository.",
"## Structure\n\nFor 'bigsciencelmevalharness', 'lmevalharness' & 'codeeval' evaluation_frameworks the structure is:\n'model_name > evaluation_framework > checkpoint_type > dataset_name > data'",
"## Evaluation Procedure\n\n- 'bigsciencelmevalharness' files were created using the below:\n - URL\n - URL\n- 'lmevalharness' files were created using the below:\n - URL\n - URL\n- 'codeeval' files were created using the HumanEval code dataset with the below:\n - URL"
] | [
26,
26,
60,
59,
68
] | [
"passage: TAGS\n#task_categories-other #size_categories-100M<n<1B #region-us \n# BigScience BLOOM Evaluation Results\n\n\nThis repository contains evaluation results & original predictions of BLOOM & friends.## Usage\n\nYou can load numeric results via:\n\n\nIf it takes too long, it may be faster to clone the repository and load the data from disk:\n\n\nFor example generations (.jsonl files), you need to manually browse the repository.## Structure\n\nFor 'bigsciencelmevalharness', 'lmevalharness' & 'codeeval' evaluation_frameworks the structure is:\n'model_name > evaluation_framework > checkpoint_type > dataset_name > data'## Evaluation Procedure\n\n- 'bigsciencelmevalharness' files were created using the below:\n - URL\n - URL\n- 'lmevalharness' files were created using the below:\n - URL\n - URL\n- 'codeeval' files were created using the HumanEval code dataset with the below:\n - URL"
] |
00649413018d64c58ab9b9e9008c51c84e3d1919 |
DALL-E-Cats is a dataset meant to help produce a synthetic animal dataset; it is the successor to DALL-E-Dogs. DALL-E-Dogs and DALL-E-Cats will be fed into an image classifier to see how it performs. This dataset is released under the [BirdL-AirL License](https://huggingface.co/spaces/BirdL/license/). | BirdL/DALL-E-Cats | [
"task_categories:image-classification",
"task_categories:unconditional-image-generation",
"size_categories:1K<n<10K",
"license:other",
"region:us"
] | 2022-08-01T19:37:15+00:00 | {"annotations_creators": [], "language_creators": [], "language": [], "license": ["other"], "multilinguality": [], "size_categories": ["1K<n<10K"], "source_datasets": [], "task_categories": ["image-classification", "unconditional-image-generation"], "task_ids": [], "pretty_name": "DALL-E Cats Dataset", "tags": []} | 2022-09-28T20:07:37+00:00 | [] | [] | TAGS
#task_categories-image-classification #task_categories-unconditional-image-generation #size_categories-1K<n<10K #license-other #region-us
|
DALL-E-Cats is a dataset meant to produce a synthetic animal dataset. This is a successor to DALL-E-Dogs. DALL-E-Dogs and DALL-E-Cats will be fed into an image classifier to see how it performs. This is under the BirdL-AirL License. | [] | [
"TAGS\n#task_categories-image-classification #task_categories-unconditional-image-generation #size_categories-1K<n<10K #license-other #region-us \n"
] | [
49
] | [
"passage: TAGS\n#task_categories-image-classification #task_categories-unconditional-image-generation #size_categories-1K<n<10K #license-other #region-us \n"
] |
a9f7f1ac75934a7c01d3ca02217544251939c881 | **Pexel Videos**
*358,551 video urls, average length 19.5s, and associated metadata from pexels.com.*
Data was extracted from their video sitemaps (pexels.com/robots.txt) on 01/08/2022.
Data is stored in `PexelVideos.parquet.gzip` as a gzipped parquet file.
To get this data, ensure you have git (with git-lfs) installed and run `!git lfs clone https://huggingface.co/datasets/Corran/pexelvideos/`.
In Python, the recommended way to read the file is with pandas:
```python
# pip install pandas pyarrow
import pandas as pd

data = pd.read_parquet("PexelVideos.parquet.gzip")

# Each row behaves like a Python dict; e.g. to get the URL and metadata for index i:
i = 0
url = data.iloc[i]["content_loc"]
```
https://pandas.pydata.org/pandas-docs/version/1.1/getting_started/index.html#getting-started
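Building on the snippet above (`data` must already be loaded, and `requests` installed), a short sketch of downloading one video:
```python
import requests

url = data.iloc[0]["content_loc"]  # direct video URL
with open("sample.mp4", "wb") as f:
    f.write(requests.get(url, timeout=60).content)
```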
**Explore this dataset using Open-Clip**
https://colab.research.google.com/drive/1m3_KfPKOC_oivqoruaseiNUlP-_MqqyX#scrollTo=bNngcd8UAOma
**License**
According to Pexels licensing, these videos are free to use for personal or commercial purposes; attribution is polite but not required. However:
-Identifiable people may not appear in a bad light or in a way that is offensive. <br>
-Don't sell unaltered copies of a photo or video, e.g. as a poster, print or on a physical product without modifying it first. <br>
-Don't imply endorsement of your product by people or brands on the imagery. <br>
-Don't redistribute or sell the photos and videos on other stock photo or wallpaper platforms. <br>
license https://www.pexels.com/license/
| Corran/pexelvideos | [
"region:us"
] | 2022-08-02T01:57:25+00:00 | {} | 2022-08-08T12:22:04+00:00 | [] | [] | TAGS
#region-us
| Pexel Videos
*358,551 video urls, average length 19.5s, and associated metadata from URL.*
Data was extracted from their video sitemaps (URL on 01/08/2022.
Data is stored in URL as a gzipped parquet
To get this data ensure you have git installed and do !git lfs clone URL
In python the reccomended reading is by opening the file with pandas.
!pip install pandas <br>
import pandas <br>
data=pd.read_parquet('URL') <br>
Get a specific url and its metadata using URL[0], read this like a python dict
e.g to get the url for index i run
url= URL[i]["content_loc"]
URL
Explore this dataset using Open-Clip
URL
License
According to Pexels licensing, these videos are free to use for personal or commercial purposes, attribution is polite but not required however,
-Identifiable people may not appear in a bad light or in a way that is offensive. <br>
-Don't sell unaltered copies of a photo or video, e.g. as a poster, print or on a physical product without modifying it first. <br>
-Don't imply endorsement of your product by people or brands on the imagery. <br>
-Don't redistribute or sell the photos and videos on other stock photo or wallpaper platforms. <br>
license URL
| [] | [
"TAGS\n#region-us \n"
] | [
6
] | [
"passage: TAGS\n#region-us \n"
] |
923d33d0d849afee9887b1f80e71e686bb5a68af |
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 1228646724
- CO2 Emissions (in grams): 1368.8941
## Validation Metrics
- Loss: 2.319
- Rouge1: 43.703
- Rouge2: 16.106
- RougeL: 23.715
- RougeLsum: 38.984
- Gen Len: 141.091
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/vishw2703/autotrain-unisumm_3-1228646724
``` | ShreySavaliya/TextSummarisation | [
"language:unk",
"autotrain",
"summarization",
"region:us"
] | 2022-08-02T05:27:58+00:00 | {"language": ["unk"], "tags": ["autotrain", "summarization"], "widget": [{"text": "I love AutoTrain 🤗"}], "datasets": ["vishw2703/autotrain-data-unisumm_3"], "co2_eq_emissions": {"emissions": 1368.894142563709}} | 2022-08-17T05:03:10+00:00
7e7d231c127baf5185b7e25b3086591df61c5b07 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: yhavinga/mt5-base-cnn-nl
* Dataset: ml6team/cnn_dailymail_nl
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
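If you want to inspect the stored predictions themselves, you can load this repository like any other dataset (a minimal sketch with the `datasets` library; the repository id is taken from this card's metadata, and the exact config/split names are not documented here, so check the repo's file listing):
```python
from datasets import load_dataset

# Repository id as listed in this card's metadata; configs/splits are assumptions.
preds = load_dataset(
    "autoevaluate/autoeval-staging-eval-project-ml6team__cnn_dailymail_nl-612d6c13-12185622"
)
print(preds)
```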
## Contributions
Thanks to [@yhavinga](https://huggingface.co/yhavinga) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-ml6team__cnn_dailymail_nl-612d6c13-12185622 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-02T09:39:57+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["ml6team/cnn_dailymail_nl"], "eval_info": {"task": "summarization", "model": "yhavinga/mt5-base-cnn-nl", "metrics": [], "dataset_name": "ml6team/cnn_dailymail_nl", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "article", "target": "highlights"}}} | 2022-08-02T11:11:44+00:00
fbc605ed17bc3f3930bce6489c04f4cf3546cf91 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: yhavinga/mt5-base-mixednews-nl
* Dataset: ml6team/cnn_dailymail_nl
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@yhavinga](https://huggingface.co/yhavinga) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-ml6team__cnn_dailymail_nl-612d6c13-12185623 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-02T09:40:05+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["ml6team/cnn_dailymail_nl"], "eval_info": {"task": "summarization", "model": "yhavinga/mt5-base-mixednews-nl", "metrics": [], "dataset_name": "ml6team/cnn_dailymail_nl", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "article", "target": "highlights"}}} | 2022-08-02T11:32:01+00:00
19cda222ed39522c3b1b340261a5ba09766d9d4b | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/roberta-large-squad2
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
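To reproduce predictions like these locally, you can run the evaluated model directly (a minimal sketch with the `transformers` pipeline API; the question/context pair below is invented for illustration):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-large-squad2")
result = qa(
    question="What is extractive question answering?",
    context=(
        "Extractive question answering is the task of selecting the span of "
        "text in a context that answers a given question."
    ),
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```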
## Contributions
Thanks to [@ceyda](https://huggingface.co/ceyda) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-adversarial_qa-1cd241d3-12195624 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-02T09:40:12+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["adversarial_qa"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/roberta-large-squad2", "metrics": [], "dataset_name": "adversarial_qa", "dataset_config": "adversarialQA", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-08-02T09:42:07+00:00
681f907c1bfc909157ce2fb38f101ab336764137 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/xlm-roberta-large-squad2
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ceyda](https://huggingface.co/ceyda) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-adversarial_qa-e34332b7-12205625 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-02T09:40:14+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["adversarial_qa"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/xlm-roberta-large-squad2", "metrics": [], "dataset_name": "adversarial_qa", "dataset_config": "adversarialQA", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-08-02T09:42:37+00:00
4c021cc32cf68644cdf094a49154425f1089a8ec | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/roberta-base-squad2-distilled
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ceyda](https://huggingface.co/ceyda) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-adversarial_qa-e34332b7-12205626 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-02T09:40:19+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["adversarial_qa"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/roberta-base-squad2-distilled", "metrics": [], "dataset_name": "adversarial_qa", "dataset_config": "adversarialQA", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-08-02T09:41:34+00:00
8b13664c3be80d2efe8e51c4d2f9404d854d9872 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/xlm-roberta-base-squad2-distilled
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ceyda](https://huggingface.co/ceyda) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-adversarial_qa-e34332b7-12205627 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-02T09:40:27+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["adversarial_qa"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/xlm-roberta-base-squad2-distilled", "metrics": [], "dataset_name": "adversarial_qa", "dataset_config": "adversarialQA", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-08-02T09:41:51+00:00
7af19d4b60ccd712521d35090b9a032bda03374c | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/tinybert-6l-768d-squad2
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ceyda](https://huggingface.co/ceyda) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-adversarial_qa-e34332b7-12205628 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-02T09:40:35+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["adversarial_qa"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/tinybert-6l-768d-squad2", "metrics": [], "dataset_name": "adversarial_qa", "dataset_config": "adversarialQA", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-08-02T09:41:46+00:00
687b60cfba2df04d63b009179832de2e6b5e2db6 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/bert-base-uncased-squad2
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ceyda](https://huggingface.co/ceyda) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-adversarial_qa-e34332b7-12205629 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-02T09:40:38+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["adversarial_qa"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/bert-base-uncased-squad2", "metrics": [], "dataset_name": "adversarial_qa", "dataset_config": "adversarialQA", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-08-02T09:41:55+00:00
39c4d334cad8018816b024476a85c85a11f082c2 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Text Classification
* Model: Rhuax/MiniLMv2-L12-H384-distilled-finetuned-spam-detection
* Dataset: sms_spam
* Config: plain_text
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
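You can also try the evaluated spam-detection model locally (a minimal sketch with the `transformers` pipeline API; the sample SMS text is invented):
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="Rhuax/MiniLMv2-L12-H384-distilled-finetuned-spam-detection",
)
print(clf("Congratulations! You have won a free prize, click here to claim."))
```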
## Contributions
Thanks to [@Al-Ip](https://huggingface.co/Al-Ip) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-sms_spam-216c1ded-12215630 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-02T09:40:39+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["sms_spam"], "eval_info": {"task": "binary_classification", "model": "Rhuax/MiniLMv2-L12-H384-distilled-finetuned-spam-detection", "metrics": [], "dataset_name": "sms_spam", "dataset_config": "plain_text", "dataset_split": "train", "col_mapping": {"text": "sms", "target": "label"}}} | 2022-08-02T09:41:15+00:00
6500ed59d1b0764caa2b526bb72c66f097e95f8d | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: Blaise-g/led_pubmed_sumpubmed_1
* Dataset: Blaise-g/SumPubmed
* Config: Blaise-g--SumPubmed
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Blaise-g](https://huggingface.co/Blaise-g) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-54a73f7a-12235635 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-02T09:42:38+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["Blaise-g/SumPubmed"], "eval_info": {"task": "summarization", "model": "Blaise-g/led_pubmed_sumpubmed_1", "metrics": ["bertscore"], "dataset_name": "Blaise-g/SumPubmed", "dataset_config": "Blaise-g--SumPubmed", "dataset_split": "test", "col_mapping": {"text": "text", "target": "abstract"}}} | 2022-08-02T10:31:13+00:00
28e036a2c5176b700ef625b46740702b23034dd1 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: Blaise-g/led_pubmed_sumpubmed_2
* Dataset: Blaise-g/SumPubmed
* Config: Blaise-g--SumPubmed
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Blaise-g](https://huggingface.co/Blaise-g) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-54a73f7a-12235636 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-02T09:42:44+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["Blaise-g/SumPubmed"], "eval_info": {"task": "summarization", "model": "Blaise-g/led_pubmed_sumpubmed_2", "metrics": ["bertscore"], "dataset_name": "Blaise-g/SumPubmed", "dataset_config": "Blaise-g--SumPubmed", "dataset_split": "test", "col_mapping": {"text": "text", "target": "abstract"}}} | 2022-08-02T10:29:01+00:00
25e614252e9ce89fcf8cc4af6e918711cbb3c528 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: Blaise-g/long_t5_global_large_pubmed_explanatory
* Dataset: Blaise-g/SumPubmed
* Config: Blaise-g--SumPubmed
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Blaise-g](https://huggingface.co/Blaise-g) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-54a73f7a-12235637 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-02T09:42:47+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["Blaise-g/SumPubmed"], "eval_info": {"task": "summarization", "model": "Blaise-g/long_t5_global_large_pubmed_explanatory", "metrics": ["bertscore"], "dataset_name": "Blaise-g/SumPubmed", "dataset_config": "Blaise-g--SumPubmed", "dataset_split": "test", "col_mapping": {"text": "text", "target": "abstract"}}} | 2022-08-02T12:26:39+00:00
61b61341f2e6e3ff845cbb5c2a6a8ecf5f798cc9 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: Blaise-g/led_large_baseline_pubmed
* Dataset: Blaise-g/SumPubmed
* Config: Blaise-g--SumPubmed
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Blaise-g](https://huggingface.co/Blaise-g) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-93d67e8f-12255638 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-02T09:43:35+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["Blaise-g/SumPubmed"], "eval_info": {"task": "summarization", "model": "Blaise-g/led_large_baseline_pubmed", "metrics": [], "dataset_name": "Blaise-g/SumPubmed", "dataset_config": "Blaise-g--SumPubmed", "dataset_split": "test", "col_mapping": {"text": "text", "target": "abstract"}}} | 2022-08-02T11:01:02+00:00
18d6acb7b5eb51e83b9c02b70eed7f33c76c8075 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: Blaise-g/long_t5_global_large_baseline_pubmed
* Dataset: Blaise-g/SumPubmed
* Config: Blaise-g--SumPubmed
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Blaise-g](https://huggingface.co/Blaise-g) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-93d67e8f-12255639 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-02T09:43:41+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["Blaise-g/SumPubmed"], "eval_info": {"task": "summarization", "model": "Blaise-g/long_t5_global_large_baseline_pubmed", "metrics": [], "dataset_name": "Blaise-g/SumPubmed", "dataset_config": "Blaise-g--SumPubmed", "dataset_split": "test", "col_mapping": {"text": "text", "target": "abstract"}}} | 2022-08-02T18:47:37+00:00
4f333c302ff8acf17091c65ea016973bea5b55fd | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: Blaise-g/long_t5_global_large_baseline_pubmed
* Dataset: Blaise-g/SumPubmed
* Config: Blaise-g--SumPubmed
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Blaise-g](https://huggingface.co/Blaise-g) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-3c512f6e-12265641 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-02T09:44:42+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["Blaise-g/SumPubmed"], "eval_info": {"task": "summarization", "model": "Blaise-g/long_t5_global_large_baseline_pubmed", "metrics": ["bertscore"], "dataset_name": "Blaise-g/SumPubmed", "dataset_config": "Blaise-g--SumPubmed", "dataset_split": "test", "col_mapping": {"text": "text", "target": "abstract"}}} | 2022-08-02T18:53:52+00:00
4d959d3ddcccbcdc6bd5eb9263a0bfe1ac4c21bf | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: Blaise-g/led_large_baseline_pubmed
* Dataset: Blaise-g/SumPubmed
* Config: Blaise-g--SumPubmed
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Blaise-g](https://huggingface.co/Blaise-g) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-3c512f6e-12265640 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-02T09:44:45+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["Blaise-g/SumPubmed"], "eval_info": {"task": "summarization", "model": "Blaise-g/led_large_baseline_pubmed", "metrics": ["bertscore"], "dataset_name": "Blaise-g/SumPubmed", "dataset_config": "Blaise-g--SumPubmed", "dataset_split": "test", "col_mapping": {"text": "text", "target": "abstract"}}} | 2022-08-02T11:23:15+00:00
47c39cc6f07bdfdb281cfe463ec5fa20b6d51a47 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: 21iridescent/RoBERTa-base-finetuned-squad2-lwt
* Dataset: cuad
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@halima](https://huggingface.co/halima) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-cuad-e5412c0a-12275642 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-02T09:45:32+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cuad"], "eval_info": {"task": "extractive_question_answering", "model": "21iridescent/RoBERTa-base-finetuned-squad2-lwt", "metrics": [], "dataset_name": "cuad", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-08-02T10:21:22+00:00
2dbc0d5727ee0cfa7704021bc39a9480f8ee1a7d | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: Blaise-g/led_pubmed_sumpubmed_3
* Dataset: Blaise-g/SumPubmed
* Config: Blaise-g--SumPubmed
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Blaise-g](https://huggingface.co/Blaise-g) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-c8bf564e-12335643 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-02T15:46:13+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["Blaise-g/SumPubmed"], "eval_info": {"task": "summarization", "model": "Blaise-g/led_pubmed_sumpubmed_3", "metrics": ["bertscore"], "dataset_name": "Blaise-g/SumPubmed", "dataset_config": "Blaise-g--SumPubmed", "dataset_split": "test", "col_mapping": {"text": "text", "target": "abstract"}}} | 2022-08-02T16:24:17+00:00
691cb00d999c35d401985121f2ee489b2b8f5de6 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: Blaise-g/led_pubmed_sumpubmed_4
* Dataset: Blaise-g/SumPubmed
* Config: Blaise-g--SumPubmed
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Blaise-g](https://huggingface.co/Blaise-g) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-c8bf564e-12335644 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-02T15:46:17+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["Blaise-g/SumPubmed"], "eval_info": {"task": "summarization", "model": "Blaise-g/led_pubmed_sumpubmed_4", "metrics": ["bertscore"], "dataset_name": "Blaise-g/SumPubmed", "dataset_config": "Blaise-g--SumPubmed", "dataset_split": "test", "col_mapping": {"text": "text", "target": "abstract"}}} | 2022-08-02T16:43:11+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: Blaise-g/led_pubmed_sumpubmed_4
* Dataset: Blaise-g/SumPubmed
* Config: Blaise-g--SumPubmed
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @Blaise-g for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: Blaise-g/led_pubmed_sumpubmed_4\n* Dataset: Blaise-g/SumPubmed\n* Config: Blaise-g--SumPubmed\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Blaise-g for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: Blaise-g/led_pubmed_sumpubmed_4\n* Dataset: Blaise-g/SumPubmed\n* Config: Blaise-g--SumPubmed\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Blaise-g for evaluating this model."
] | [
13,
106,
18
] | [
"passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: Blaise-g/led_pubmed_sumpubmed_4\n* Dataset: Blaise-g/SumPubmed\n* Config: Blaise-g--SumPubmed\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @Blaise-g for evaluating this model."
] |
e3fe65be167f5aa4698afaa58d32d3eeaf834c71 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: Blaise-g/led_pubmed_sumpubmed_5
* Dataset: Blaise-g/SumPubmed
* Config: Blaise-g--SumPubmed
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Blaise-g](https://huggingface.co/Blaise-g) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-c8bf564e-12335645 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-02T15:46:27+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["Blaise-g/SumPubmed"], "eval_info": {"task": "summarization", "model": "Blaise-g/led_pubmed_sumpubmed_5", "metrics": ["bertscore"], "dataset_name": "Blaise-g/SumPubmed", "dataset_config": "Blaise-g--SumPubmed", "dataset_split": "test", "col_mapping": {"text": "text", "target": "abstract"}}} | 2022-08-02T16:55:50+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: Blaise-g/led_pubmed_sumpubmed_5
* Dataset: Blaise-g/SumPubmed
* Config: Blaise-g--SumPubmed
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @Blaise-g for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: Blaise-g/led_pubmed_sumpubmed_5\n* Dataset: Blaise-g/SumPubmed\n* Config: Blaise-g--SumPubmed\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Blaise-g for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: Blaise-g/led_pubmed_sumpubmed_5\n* Dataset: Blaise-g/SumPubmed\n* Config: Blaise-g--SumPubmed\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Blaise-g for evaluating this model."
] | [
13,
106,
18
] | [
"passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: Blaise-g/led_pubmed_sumpubmed_5\n* Dataset: Blaise-g/SumPubmed\n* Config: Blaise-g--SumPubmed\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @Blaise-g for evaluating this model."
] |
42a9884a2e30084417f497d64829ff3d7162492f | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP10
* Dataset: kmfoda/booksum
* Config: kmfoda--booksum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-kmfoda__booksum-07d54673-12345646 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-02T17:57:01+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["kmfoda/booksum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP10", "metrics": [], "dataset_name": "kmfoda/booksum", "dataset_config": "kmfoda--booksum", "dataset_split": "test", "col_mapping": {"text": "chapter", "target": "summary_text"}}} | 2022-08-03T20:34:30+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP10
* Dataset: kmfoda/booksum
* Config: kmfoda--booksum
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @pszemraj for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP10\n* Dataset: kmfoda/booksum\n* Config: kmfoda--booksum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP10\n* Dataset: kmfoda/booksum\n* Config: kmfoda--booksum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] | [
13,
112,
16
] | [
"passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP10\n* Dataset: kmfoda/booksum\n* Config: kmfoda--booksum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @pszemraj for evaluating this model."
] |
0761f2c5a7799569a8662dcc39a352206225b43d | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP10
* Dataset: xsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-xsum-19ae30f1-12355647 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-02T18:01:48+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP10", "metrics": [], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}} | 2022-08-04T02:41:57+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP10
* Dataset: xsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @pszemraj for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP10\n* Dataset: xsum\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP10\n* Dataset: xsum\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] | [
13,
102,
16
] | [
"passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP10\n* Dataset: xsum\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @pszemraj for evaluating this model."
] |
50e25ed78f4fc72fbfca9fe76a910ce67088667e |
This dataset consists of approximately 50k research articles from the **PubMed** repository. The documents were manually annotated by biomedical experts with MeSH labels, and each article is described by 10-15 MeSH labels. The raw data contains a huge number of labels appearing as MeSH majors, which raises the issues of an extremely large output space and severe label sparsity. To address this, the dataset has been processed and each label mapped to its root category, as described in the figure below.
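A minimal loading sketch (the split name `train` and the use of the `datasets` library are assumptions; check `column_names` for the exact text and label columns):

```python
# A minimal sketch, assuming the dataset loads with the standard `datasets` API.
from datasets import load_dataset

ds = load_dataset("owaiskha9654/PubMed_MultiLabel_Text_Classification_Dataset_MeSH")

print(ds)                        # splits and row counts
print(ds["train"].column_names)  # inspect the text and root-label columns
print(ds["train"][0])            # one article with its mapped MeSH root labels
```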

 | owaiskha9654/PubMed_MultiLabel_Text_Classification_Dataset_MeSH | [
"task_categories:text-classification",
"task_ids:multi-label-classification",
"size_categories:10K<n<100K",
"source_datasets:BioASQ Task A",
"language:en",
"license:afl-3.0",
"region:us"
] | 2022-08-02T19:13:50+00:00 | {"language": ["en"], "license": "afl-3.0", "size_categories": ["10K<n<100K"], "source_datasets": ["BioASQ Task A"], "task_categories": ["text-classification"], "task_ids": ["multi-label-classification"], "pretty_name": "BioASQ, PUBMED"} | 2023-01-30T09:50:44+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-classification #task_ids-multi-label-classification #size_categories-10K<n<100K #source_datasets-BioASQ Task A #language-English #license-afl-3.0 #region-us
|
This dataset consists of approximately 50k research articles from the PubMed repository. The documents were manually annotated by biomedical experts with MeSH labels, and each article is described by 10-15 MeSH labels. The raw data contains a huge number of labels appearing as MeSH majors, which raises the issues of an extremely large output space and severe label sparsity. To address this, the dataset has been processed and each label mapped to its root category, as described in the figure below.
!Mapped Image not Fetched
!Tree Structure | [] | [
"TAGS\n#task_categories-text-classification #task_ids-multi-label-classification #size_categories-10K<n<100K #source_datasets-BioASQ Task A #language-English #license-afl-3.0 #region-us \n"
] | [
66
] | [
"passage: TAGS\n#task_categories-text-classification #task_ids-multi-label-classification #size_categories-10K<n<100K #source_datasets-BioASQ Task A #language-English #license-afl-3.0 #region-us \n"
] |
ba2fde998044a29968fa13af93c291be5626bff5 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: Blaise-g/led-large-sumpubmed
* Dataset: Blaise-g/SumPubmed
* Config: Blaise-g--SumPubmed
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Blaise-g](https://huggingface.co/Blaise-g) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-f53a4404-12415653 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-02T19:16:41+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["Blaise-g/SumPubmed"], "eval_info": {"task": "summarization", "model": "Blaise-g/led-large-sumpubmed", "metrics": [], "dataset_name": "Blaise-g/SumPubmed", "dataset_config": "Blaise-g--SumPubmed", "dataset_split": "test", "col_mapping": {"text": "text", "target": "abstract"}}} | 2022-08-02T21:14:52+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: Blaise-g/led-large-sumpubmed
* Dataset: Blaise-g/SumPubmed
* Config: Blaise-g--SumPubmed
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @Blaise-g for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: Blaise-g/led-large-sumpubmed\n* Dataset: Blaise-g/SumPubmed\n* Config: Blaise-g--SumPubmed\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Blaise-g for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: Blaise-g/led-large-sumpubmed\n* Dataset: Blaise-g/SumPubmed\n* Config: Blaise-g--SumPubmed\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Blaise-g for evaluating this model."
] | [
13,
104,
18
] | [
"passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: Blaise-g/led-large-sumpubmed\n* Dataset: Blaise-g/SumPubmed\n* Config: Blaise-g--SumPubmed\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @Blaise-g for evaluating this model."
] |
162574e34bf5cd64881b2689909f43b0aa971a0b | # laion2B-multi-korean-subset
## Dataset Description
- **Homepage:** [laion-5b](https://laion.ai/blog/laion-5b/)
- **Huggingface:** [laion/laion2B-multi](https://huggingface.co/datasets/laion/laion2B-multi)
## About dataset
A subset of [laion/laion2B-multi](https://huggingface.co/datasets/laion/laion2B-multi) containing only the Korean data.
### License
CC-BY-4.0
## Data Structure
### Data Instance
```py
>>> from datasets import load_dataset
>>> dataset = load_dataset("Bingsu/laion2B-multi-korean-subset")
>>> dataset
DatasetDict({
    train: Dataset({
        features: ['SAMPLE_ID', 'URL', 'TEXT', 'HEIGHT', 'WIDTH', 'LICENSE', 'LANGUAGE', 'NSFW', 'similarity'],
        num_rows: 11376263
    })
})
```
```py
>>> dataset["train"].features
{'SAMPLE_ID': Value(dtype='int64', id=None),
 'URL': Value(dtype='string', id=None),
 'TEXT': Value(dtype='string', id=None),
 'HEIGHT': Value(dtype='int32', id=None),
 'WIDTH': Value(dtype='int32', id=None),
 'LICENSE': Value(dtype='string', id=None),
 'LANGUAGE': Value(dtype='string', id=None),
 'NSFW': Value(dtype='string', id=None),
 'similarity': Value(dtype='float32', id=None)}
```
### Data Size
download: 1.56 GiB<br>
generated: 2.37 GiB<br>
total: 3.93 GiB
### Data Field
- 'SAMPLE_ID': `int`
- 'URL': `string`
- 'TEXT': `string`
- 'HEIGHT': `int`
- 'WIDTH': `int`
- 'LICENSE': `string`
- 'LANGUAGE': `string`
- 'NSFW': `string`
- 'similarity': `float`
### Data Splits
| | train |
| --------- | -------- |
| # of data | 11376263 |
## Note
### Height, Width
It looks like the image's width is stored in `HEIGHT` and its height in `WIDTH` (the two fields appear to be swapped).
```pycon
>>> dataset["train"][98]
{'SAMPLE_ID': 2937471001780,
 'URL': 'https://image.ajunews.com/content/image/2019/04/12/20190412175643597949.png',
 'TEXT': '인천시교육청, 인천 시군구발전협의회 임원진과의 간담회 개최',
 'HEIGHT': 640,
 'WIDTH': 321,
 'LICENSE': '?',
 'LANGUAGE': 'ko',
 'NSFW': 'UNLIKELY',
 'similarity': 0.33347243070602417}
```

### csv file, pandas
```py
# pip install zstandard
import pandas as pd
from huggingface_hub import hf_hub_url
url = hf_hub_url("Bingsu/laion2B-multi-korean-subset", filename="laion2B-multi-korean-subset.csv.zst", repo_type="dataset")
# url = "https://huggingface.co/datasets/Bingsu/laion2B-multi-korean-subset/resolve/main/laion2B-multi-korean-subset.csv.zst"
df = pd.read_csv(url)
```
<https://huggingface.co/datasets/Bingsu/laion2B-multi-korean-subset/resolve/main/laion2B-multi-korean-subset.csv.zst>
778 MB
### Code used to generate
```py
import csv
import re

from datasets import load_dataset
from tqdm import tqdm

# Matches any Hangul syllable
pattern = re.compile(r"[가-힣]")


def quote(s: str) -> str:
    # Remove triple double-quotes that would break the csv output
    s = s.replace('"""', "")
    return s


def filter_func(example) -> bool:
    lang = example.get("LANGUAGE")
    text = example.get("TEXT")
    if not isinstance(lang, str) or not isinstance(text, str):
        return False
    # Keep rows tagged as Korean or whose caption contains Hangul
    return lang == "ko" or pattern.search(text) is not None


file = open("./laion2B-mulit_korean_subset.csv", "w", encoding="utf-8", newline="")
ds = load_dataset("laion/laion2B-multi", split="train", streaming=True)
dsf = ds.filter(filter_func)

header = [
    "SAMPLE_ID",
    "URL",
    "TEXT",
    "HEIGHT",
    "WIDTH",
    "LICENSE",
    "LANGUAGE",
    "NSFW",
    "similarity",
]
writer = csv.DictWriter(file, fieldnames=header)
writer.writeheader()

try:
    for data in tqdm(dsf):  # total=11378843
        data["TEXT"] = quote(data.get("TEXT", ""))
        if data["TEXT"]:
            writer.writerow(data)
finally:
    file.close()

print("Done!")
```
The run took about 8 hours. Afterwards, rows whose `HEIGHT` or `WIDTH` was None were removed before uploading.
### img2dataset
Using [img2dataset](https://github.com/rom1504/img2dataset), the images behind the URLs can be downloaded and turned into an image dataset, as in the sketch below.
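A rough sketch only: the parameter names follow img2dataset's README and may differ between versions, and the csv should be decompressed from `.zst` first if your version does not read compressed input.

```py
# A sketch only: parameter names follow the img2dataset README and may vary by version.
from img2dataset import download

download(
    url_list="laion2B-multi-korean-subset.csv",  # decompressed from the .zst file above
    input_format="csv",
    url_col="URL",
    caption_col="TEXT",
    output_format="webdataset",
    output_folder="laion2b-ko-images",
    processes_count=8,
    thread_count=64,
    image_size=256,
)
```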
| Bingsu/laion2B-multi-korean-subset | [
"task_categories:feature-extraction",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"language:ko",
"license:cc-by-4.0",
"region:us"
] | 2022-08-03T05:57:55+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["ko"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10M<n<100M"], "task_categories": ["feature-extraction"], "pretty_name": "laion2B-multi-korean-subset"} | 2022-10-14T04:23:17+00:00 | [] | [
"ko"
] | TAGS
#task_categories-feature-extraction #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10M<n<100M #language-Korean #license-cc-by-4.0 #region-us
| laion2B-multi-korean-subset
===========================
Dataset Description
-------------------
* Homepage: laion-5b
* Huggingface: laion/laion2B-multi
About dataset
-------------
A subset of laion/laion2B-multi containing only the Korean data.
### License
CC-BY-4.0
Data Structure
--------------
### Data Instance
### Data Size
download: 1.56 GiB
generated: 2.37 GiB
total: 3.93 GiB
### Data Field
* 'SAMPLE\_ID': 'int'
* 'URL': 'string'
* 'TEXT': 'string'
* 'HEIGHT': 'int'
* 'WIDTH': 'int'
* 'LICENSE': 'string'
* 'LANGUAGE': 'string'
* 'NSFW': 'string'
* 'similarity': 'float'
### Data Splits
Note
----
### Height, Width
It looks like the image's width is stored in 'HEIGHT' and its height in 'WIDTH' (the two fields appear to be swapped).
!image
### csv file, pandas
<URL
778 MB
### Code used to generate
The run took about 8 hours. Afterwards, rows whose 'HEIGHT' or 'WIDTH' was None were removed before uploading.
### img2dataset
Using img2dataset, the images behind the URLs can be downloaded and turned into an image dataset.
| [
"### Lisence\n\n\nCC-BY-4.0\n\n\nData Structure\n--------------",
"### Data Instance",
"### Data Size\n\n\ndownload: 1.56 GiB \n\ngenerated: 2.37 GiB \n\ntotal: 3.93 GiB",
"### Data Field\n\n\n* 'SAMPLE\\_ID': 'int'\n* 'URL': 'string'\n* 'TEXT': 'string'\n* 'HEIGHT': 'int'\n* 'WIDTH': 'int'\n* 'LICENSE': 'string'\n* 'LANGUAGE': 'string'\n* 'NSFW': 'string'\n* 'similarity': 'float'",
"### Data Splits\n\n\n\nNote\n----",
"### Height, Width\n\n\n이미지의 가로가 'HEIGHT'로, 세로가 'WIDTH'로 되어있는 것 같습니다.\n\n\n!image",
"### csv file, pandas\n\n\n<URL\n\n\n778 MB",
"### Code used to generate\n\n\n실행에 약 8시간이 소요되었습니다. 이후에 'HEIGHT'나 'WIDTH'가 None인 데이터를 제거하고 업로드하였습니다.",
"### img2dataset\n\n\nimg2dataset을 사용하여 URL로된 이미지들을 데이터셋 형태로 만들 수 있습니다."
] | [
"TAGS\n#task_categories-feature-extraction #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10M<n<100M #language-Korean #license-cc-by-4.0 #region-us \n",
"### Lisence\n\n\nCC-BY-4.0\n\n\nData Structure\n--------------",
"### Data Instance",
"### Data Size\n\n\ndownload: 1.56 GiB \n\ngenerated: 2.37 GiB \n\ntotal: 3.93 GiB",
"### Data Field\n\n\n* 'SAMPLE\\_ID': 'int'\n* 'URL': 'string'\n* 'TEXT': 'string'\n* 'HEIGHT': 'int'\n* 'WIDTH': 'int'\n* 'LICENSE': 'string'\n* 'LANGUAGE': 'string'\n* 'NSFW': 'string'\n* 'similarity': 'float'",
"### Data Splits\n\n\n\nNote\n----",
"### Height, Width\n\n\n이미지의 가로가 'HEIGHT'로, 세로가 'WIDTH'로 되어있는 것 같습니다.\n\n\n!image",
"### csv file, pandas\n\n\n<URL\n\n\n778 MB",
"### Code used to generate\n\n\n실행에 약 8시간이 소요되었습니다. 이후에 'HEIGHT'나 'WIDTH'가 None인 데이터를 제거하고 업로드하였습니다.",
"### img2dataset\n\n\nimg2dataset을 사용하여 URL로된 이미지들을 데이터셋 형태로 만들 수 있습니다."
] | [
76,
15,
5,
23,
92,
7,
35,
13,
39,
27
] | [
"passage: TAGS\n#task_categories-feature-extraction #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10M<n<100M #language-Korean #license-cc-by-4.0 #region-us \n### Lisence\n\n\nCC-BY-4.0\n\n\nData Structure\n--------------### Data Instance### Data Size\n\n\ndownload: 1.56 GiB \n\ngenerated: 2.37 GiB \n\ntotal: 3.93 GiB### Data Field\n\n\n* 'SAMPLE\\_ID': 'int'\n* 'URL': 'string'\n* 'TEXT': 'string'\n* 'HEIGHT': 'int'\n* 'WIDTH': 'int'\n* 'LICENSE': 'string'\n* 'LANGUAGE': 'string'\n* 'NSFW': 'string'\n* 'similarity': 'float'### Data Splits\n\n\n\nNote\n----### Height, Width\n\n\n이미지의 가로가 'HEIGHT'로, 세로가 'WIDTH'로 되어있는 것 같습니다.\n\n\n!image### csv file, pandas\n\n\n<URL\n\n\n778 MB### Code used to generate\n\n\n실행에 약 8시간이 소요되었습니다. 이후에 'HEIGHT'나 'WIDTH'가 None인 데이터를 제거하고 업로드하였습니다.### img2dataset\n\n\nimg2dataset을 사용하여 URL로된 이미지들을 데이터셋 형태로 만들 수 있습니다."
] |
5e4902d05a661db4ce45d0297930102ebb3d4ebf | maydataset | NitishKarra/mayds | [
"region:us"
] | 2022-08-03T06:01:53+00:00 | {} | 2022-08-03T06:02:13+00:00 | [] | [] | TAGS
#region-us
| maydataset | [] | [
"TAGS\n#region-us \n"
] | [
6
] | [
"passage: TAGS\n#region-us \n"
] |
9a801afa9c04957bcc709e5a8e298ffd6a660a3e | newwdATAASEt
| NitishKarra/mydsssss | [
"region:us"
] | 2022-08-03T06:05:25+00:00 | {} | 2022-08-03T06:11:43+00:00 | [] | [] | TAGS
#region-us
| newwdATAASEt
| [] | [
"TAGS\n#region-us \n"
] | [
6
] | [
"passage: TAGS\n#region-us \n"
] |
454f4d08791516ecf455762cea2a931a1e3b2650 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Text Classification
* Model: lvwerra/distilbert-imdb
* Dataset: imdb
* Config: plain_text
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lvwerra](https://huggingface.co/lvwerra) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-imdb-f49f2e4f-12435655 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-03T06:50:05+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["imdb"], "eval_info": {"task": "binary_classification", "model": "lvwerra/distilbert-imdb", "metrics": [], "dataset_name": "imdb", "dataset_config": "plain_text", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}} | 2022-08-03T06:51:43+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Binary Text Classification
* Model: lvwerra/distilbert-imdb
* Dataset: imdb
* Config: plain_text
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @lvwerra for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Binary Text Classification\n* Model: lvwerra/distilbert-imdb\n* Dataset: imdb\n* Config: plain_text\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @lvwerra for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Binary Text Classification\n* Model: lvwerra/distilbert-imdb\n* Dataset: imdb\n* Config: plain_text\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @lvwerra for evaluating this model."
] | [
13,
90,
16
] | [
"passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Binary Text Classification\n* Model: lvwerra/distilbert-imdb\n* Dataset: imdb\n* Config: plain_text\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lvwerra for evaluating this model."
] |
00ebce44a5ccead88cdad67882e7ecc32ae3debd | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Text Classification
* Model: lvwerra/distilbert-imdb
* Dataset: imdb
* Config: plain_text
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lvwerra](https://huggingface.co/lvwerra) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-imdb-ed2a920e-12445656 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-03T06:50:47+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["imdb"], "eval_info": {"task": "binary_classification", "model": "lvwerra/distilbert-imdb", "metrics": [], "dataset_name": "imdb", "dataset_config": "plain_text", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}} | 2022-08-03T06:52:28+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Binary Text Classification
* Model: lvwerra/distilbert-imdb
* Dataset: imdb
* Config: plain_text
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @lvwerra for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Binary Text Classification\n* Model: lvwerra/distilbert-imdb\n* Dataset: imdb\n* Config: plain_text\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @lvwerra for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Binary Text Classification\n* Model: lvwerra/distilbert-imdb\n* Dataset: imdb\n* Config: plain_text\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @lvwerra for evaluating this model."
] | [
13,
90,
16
] | [
"passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Binary Text Classification\n* Model: lvwerra/distilbert-imdb\n* Dataset: imdb\n* Config: plain_text\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lvwerra for evaluating this model."
] |
8aec023a4a4de9d01302f33f9fc1d7331c2ca7ca | ### Dataset Summary
Dataset of satirical news from "Panorama", the Russian equivalent of "The Onion".
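A minimal loading sketch (the split name `train` is an assumption; the `title` and `body` fields are described in the next subsection):

```python
# A minimal sketch, assuming the standard `datasets` JSON Lines handling applies.
from datasets import load_dataset

panorama = load_dataset("its5Q/panorama", split="train")

print(panorama[0]["title"])       # article title
print(panorama[0]["body"][:200])  # first 200 characters of the article body
```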
### Dataset Format
Dataset is in JSONLines format, where "title" is the article title, and "body" is the contents of the article. | its5Q/panorama | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ru",
"license:unknown",
"news",
"articles",
"newspapers",
"panorama",
"region:us"
] | 2022-08-03T08:04:25+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["other"], "language": ["ru"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "Dataset of satirical news from \"Panorama\", Russian \"The Onion\".", "tags": ["news", "articles", "newspapers", "panorama"]} | 2022-08-05T17:18:10+00:00 | [] | [
"ru"
] | TAGS
#task_categories-text-generation #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-other #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Russian #license-unknown #news #articles #newspapers #panorama #region-us
| ### Dataset Summary
Dataset of satirical news from "Panorama", the Russian equivalent of "The Onion".
### Dataset Format
Dataset is in JSONLines format, where "title" is the article title, and "body" is the contents of the article. | [
"### Dataset Summary\nDataset of satirical news from \"Panorama\", Russian \"The Onion\".",
"### Dataset Format\nDataset is in JSONLines format, where \"title\" is the article title, and \"body\" are contents of the article."
] | [
"TAGS\n#task_categories-text-generation #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-other #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Russian #license-unknown #news #articles #newspapers #panorama #region-us \n",
"### Dataset Summary\nDataset of satirical news from \"Panorama\", Russian \"The Onion\".",
"### Dataset Format\nDataset is in JSONLines format, where \"title\" is the article title, and \"body\" are contents of the article."
] | [
100,
25,
35
] | [
"passage: TAGS\n#task_categories-text-generation #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-other #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Russian #license-unknown #news #articles #newspapers #panorama #region-us \n### Dataset Summary\nDataset of satirical news from \"Panorama\", Russian \"The Onion\".### Dataset Format\nDataset is in JSONLines format, where \"title\" is the article title, and \"body\" are contents of the article."
] |
dfc66dcde3cd3a6d09c28da9890115ae6c3e807e | dmartbillsdsd | NitishKarra/Dmart_ds | [
"region:us"
] | 2022-08-03T09:42:05+00:00 | {} | 2022-08-03T09:52:16+00:00 | [] | [] | TAGS
#region-us
| dmartbillsdsd | [] | [
"TAGS\n#region-us \n"
] | [
6
] | [
"passage: TAGS\n#region-us \n"
] |
4d2a8a09970f5779705814c8b0aef7e4eed2244d | bus_bills | NitishKarra/Nitishh | [
"region:us"
] | 2022-08-03T13:05:31+00:00 | {} | 2022-08-03T13:13:17+00:00 | [] | [] | TAGS
#region-us
| bus_bills | [] | [
"TAGS\n#region-us \n"
] | [
6
] | [
"passage: TAGS\n#region-us \n"
] |
9da612b1bb02c71e04e79758e84bf9f81b9cb93d | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/deberta-v3-large-squad2
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@sjrlee](https://huggingface.co/sjrlee) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-squad-6abc415f-12465657 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-03T13:48:33+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/deberta-v3-large-squad2", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-08-03T13:56:07+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Question Answering
* Model: deepset/deberta-v3-large-squad2
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @sjrlee for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/deberta-v3-large-squad2\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @sjrlee for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/deberta-v3-large-squad2\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @sjrlee for evaluating this model."
] | [
13,
93,
17
] | [
"passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/deberta-v3-large-squad2\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @sjrlee for evaluating this model."
] |
05b0e1d75a024403670f58bdcc28e87d7930d1c1 | invoice bills | NitishKarra/invoioc | [
"region:us"
] | 2022-08-03T14:11:23+00:00 | {} | 2022-08-03T14:20:08+00:00 | [] | [] | TAGS
#region-us
| invoice bills | [] | [
"TAGS\n#region-us \n"
] | [
6
] | [
"passage: TAGS\n#region-us \n"
] |
afda465737e77099473336b4caf60b70fe969dcc |
# Dataset Card for Multi-LexSum
## Table of Contents
- [Dataset Card for Multi-LexSum](#dataset-card-for-multi-lexsum)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset](#dataset)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Sheet (Datasheet)](#dataset-sheet-datasheet)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Release History](#release-history)
## Dataset Description
- **Homepage:** https://multilexsum.github.io
- **Repository:** https://github.com/multilexsum/dataset
- **Paper:** https://arxiv.org/abs/2206.10883
<p>
<a href="https://multilexsum.github.io" style="display: inline-block;">
<img src="https://img.shields.io/badge/-homepage-informational.svg?logo=jekyll" title="Multi-LexSum Paper" style="margin-top: 0.25rem; margin-bottom: 0.25rem"></a>
<a href="https://github.com/multilexsum/dataset" style="display: inline-block;">
<img src="https://img.shields.io/badge/-multilexsum-lightgrey.svg?logo=github" title="Multi-LexSum Github Repo" style="margin-top: 0.25rem; margin-bottom: 0.25rem"></a>
<a href="https://arxiv.org/abs/2206.10883" style="display: inline-block;">
<img src="https://img.shields.io/badge/NeurIPS-2022-9cf" title="Multi-LexSum is accepted in NeurIPS 2022" style="margin-top: 0.25rem; margin-bottom: 0.25rem"></a>
</p>
### Talk @ NeurIPS 2022
[Watch the talk on YouTube](https://youtu.be/C-fwW_ZhkE8)
### Dataset Summary
The Multi-LexSum dataset is a collection of 9,280 legal case summaries. Multi-LexSum is distinct from other datasets in its **multiple target summaries, each at a different granularity** (ranging from one-sentence “extreme” summaries to multi-paragraph narrations of over five hundred words). It presents a challenging multi-document summarization task given **the long length of the source documents**, often exceeding two hundred pages per case. Unlike other summarization datasets that are (semi-)automatically curated, Multi-LexSum consists of **expert-authored summaries**: the experts—lawyers and law students—are trained to follow carefully created guidelines, and their work is reviewed by an additional expert to ensure quality.
### Languages
English
## Dataset
### Data Fields
The dataset contains a list of instances (cases); each instance contains the following data:
| Field | Description |
| ------------: | -------------------------------------------------------------------------------: |
| id | `(str)` The case ID |
| sources | `(List[str])` A list of strings for the text extracted from the source documents |
| summary/long | `(str)` The long (multi-paragraph) summary for this case |
| summary/short | `(Optional[str])` The short (one-paragraph) summary for this case |
| summary/tiny | `(Optional[str])` The tiny (one-sentence) summary for this case |
Please check the exemplar usage below for loading the data:
```python
from datasets import load_dataset

multi_lexsum = load_dataset("allenai/multi_lexsum", name="v20230518")
# Download multi_lexsum locally and load it as a Dataset object

example = multi_lexsum["validation"][0]  # The first instance of the dev set
example["sources"]  # A list of source document text for the case

for sum_len in ["long", "short", "tiny"]:
    print(example["summary/" + sum_len])  # Summaries of three lengths

print(example["case_metadata"])  # The corresponding metadata for a case in a dict
```
### Data Splits
| | Instances | Source Documents (D) | Long Summaries (L) | Short Summaries (S) | Tiny Summaries (T) | Total Summaries |
| ----------: | --------: | -------------------: | -----------------: | ------------------: | -----------------: | --------------: |
| Train (70%) | 3,177 | 28,557 | 3,177 | 2,210 | 1,130 | 6,517 |
| Test (20%) | 908 | 7,428 | 908 | 616 | 312 | 1,836 |
| Dev (10%) | 454 | 4,134 | 454 | 312 | 161 | 927 |
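As the table shows, short and tiny summaries exist only for a subset of cases. A minimal sketch for keeping only the training cases that carry all three granularities (assuming missing summaries are stored as `None`, as the `Optional[str]` types above suggest):

```python
# A minimal sketch: filter the train split down to cases with all three summaries.
from datasets import load_dataset

multi_lexsum = load_dataset("allenai/multi_lexsum", name="v20230518")

complete = multi_lexsum["train"].filter(
    lambda ex: ex["summary/short"] is not None and ex["summary/tiny"] is not None
)
print(len(complete))  # about 1,130 cases if every tiny-summary case also has a short one
```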
## Dataset Sheet (Datasheet)
Please check our [dataset sheet](https://multilexsum.github.io/datasheet) for details regarding dataset creation, source data, annotation, and considerations for the usage.
## Additional Information
### Dataset Curators
The dataset is created by the collaboration between Civil Rights Litigation Clearinghouse (CRLC, from University of Michigan) and Allen Institute for AI. Multi-LexSum builds on the dataset used and posted by the Clearinghouse to inform the public about civil rights litigation.
### Licensing Information
The Multi-LexSum dataset is distributed under the [Open Data Commons Attribution License (ODC-By)](https://opendatacommons.org/licenses/by/1-0/).
The case summaries and metadata are licensed under the [Creative Commons Attribution License (CC BY-NC)](https://creativecommons.org/licenses/by-nc/4.0/), and the source documents are already in the public domain.
Commercial users who desire a license for summaries and metadata can contact [[email protected]](mailto:[email protected]), which will allow free use but limit summary re-posting.
The corresponding code for downloading and loading the dataset is licensed under the Apache License 2.0.
### Citation Information
```
@article{Shen2022MultiLexSum,
author = {Zejiang Shen and
Kyle Lo and
Lauren Yu and
Nathan Dahlberg and
Margo Schlanger and
Doug Downey},
title = {Multi-LexSum: Real-World Summaries of Civil Rights Lawsuits at Multiple Granularities},
journal = {CoRR},
volume = {abs/2206.10883},
year = {2022},
url = {https://doi.org/10.48550/arXiv.2206.10883},
doi = {10.48550/arXiv.2206.10883}
}
```
## Release History
| Version | Description |
| ----------: | -----------------------------------------------------------: |
| `v20230518` | The v1.1 release including case and source document metadata |
| `v20220616` | The initial v1.0 release | | allenai/multi_lexsum | [
"task_categories:summarization",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:odc-by",
"arxiv:2206.10883",
"region:us"
] | 2022-08-03T14:51:10+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": ["odc-by"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K", "10K<n<100K"], "source_datasets": ["original"], "task_categories": ["summarization"], "task_ids": [], "pretty_name": "Multi-LexSum", "tags": []} | 2023-05-18T20:41:22+00:00 | [
"2206.10883"
] | [
"en"
] | TAGS
#task_categories-summarization #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #size_categories-10K<n<100K #source_datasets-original #language-English #license-odc-by #arxiv-2206.10883 #region-us
| Dataset Card for Multi-LexSum
=============================
Table of Contents
-----------------
* Dataset Card for Multi-LexSum
+ Table of Contents
+ Dataset Description
- Dataset Summary
- Languages
+ Dataset
- Data Fields
- Data Splits
+ Dataset Sheet (Datasheet)
+ Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
+ Release History
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper: URL
[](URL)
[](URL style=)
[](URL style=)
### Talk @ NeurIPS 2022
### Dataset Summary

The Multi-LexSum dataset is a collection of 9,280 legal case summaries. Multi-LexSum is distinct from other datasets in its multiple target summaries, each at a different granularity (ranging from one-sentence "extreme" summaries to multi-paragraph narrations of over five hundred words). It presents a challenging multi-document summarization task given the long length of the source documents, often exceeding two hundred pages per case. Unlike other summarization datasets that are (semi-)automatically curated, Multi-LexSum consists of expert-authored summaries: the experts—lawyers and law students—are trained to follow carefully created guidelines, and their work is reviewed by an additional expert to ensure quality.
### Languages
English
Dataset
-------
### Data Fields
The dataset contains a list of instances (cases); each instance contains the following data:
Please check the exemplar usage below for loading the data:
### Data Splits
Dataset Sheet (Datasheet)
-------------------------
Please check our dataset sheet for details regarding dataset creation, source data, annotation, and considerations for the usage.
Additional Information
----------------------
### Dataset Curators
The dataset is created by the collaboration between Civil Rights Litigation Clearinghouse (CRLC, from University of Michigan) and Allen Institute for AI. Multi-LexSum builds on the dataset used and posted by the Clearinghouse to inform the public about civil rights litigation.
### Licensing Information
The Multi-LexSum dataset is distributed under the Open Data Commons Attribution License (ODC-By).
The case summaries and metadata are licensed under the Creative Commons Attribution License (CC BY-NC), and the source documents are already in the public domain.
Commercial users who desire a license for summaries and metadata can contact info@URL, which will allow free use but limit summary re-posting.
The corresponding code for downloading and loading the dataset is licensed under the Apache License 2.0.
Release History
---------------
| [
"### Talk @ NeurIPS 2022\n\n\n. It presents a challenging multi-document summarization task given the long length of the source documents, often exceeding two hundred pages per case. Unlike other summarization datasets that are (semi-)automatically curated, Multi-LexSum consists of expert-authored summaries: the experts—lawyers and law students—are trained to follow carefully created guidelines, and their work is reviewed by an additional expert to ensure quality.",
"### Languages\n\n\nEnglish\n\n\nDataset\n-------",
"### Data Fields\n\n\nThe dataset contains a list of instances (cases); each instance contains the following data:\n\n\n\nPlease check the exemplar usage below for loading the data:",
"### Data Splits\n\n\n\nDataset Sheet (Datasheet)\n-------------------------\n\n\nPlease check our dataset sheet for details regarding dataset creation, source data, annotation, and considerations for the usage.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nThe dataset is created by the collaboration between Civil Rights Litigation Clearinghouse (CRLC, from University of Michigan) and Allen Institute for AI. Multi-LexSum builds on the dataset used and posted by the Clearinghouse to inform the public about civil rights litigation.",
"### Licensing Information\n\n\nThe Multi-LexSum dataset is distributed under the Open Data Commons Attribution License (ODC-By).\nThe case summaries and metadata are licensed under the Creative Commons Attribution License (CC BY-NC), and the source documents are already in the public domain.\nCommercial users who desire a license for summaries and metadata can contact info@URL, which will allow free use but limit summary re-posting.\nThe corresponding code for downloading and loading the dataset is licensed under the Apache License 2.0.\n\n\nRelease History\n---------------"
] | [
"TAGS\n#task_categories-summarization #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #size_categories-10K<n<100K #source_datasets-original #language-English #license-odc-by #arxiv-2206.10883 #region-us \n",
"### Talk @ NeurIPS 2022\n\n\n. It presents a challenging multi-document summarization task given the long length of the source documents, often exceeding two hundred pages per case. Unlike other summarization datasets that are (semi-)automatically curated, Multi-LexSum consists of expert-authored summaries: the experts—lawyers and law students—are trained to follow carefully created guidelines, and their work is reviewed by an additional expert to ensure quality.",
"### Languages\n\n\nEnglish\n\n\nDataset\n-------",
"### Data Fields\n\n\nThe dataset contains a list of instances (cases); each instance contains the following data:\n\n\n\nPlease check the exemplar usage below for loading the data:",
"### Data Splits\n\n\n\nDataset Sheet (Datasheet)\n-------------------------\n\n\nPlease check our dataset sheet for details regarding dataset creation, source data, annotation, and considerations for the usage.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nThe dataset is created by the collaboration between Civil Rights Litigation Clearinghouse (CRLC, from University of Michigan) and Allen Institute for AI. Multi-LexSum builds on the dataset used and posted by the Clearinghouse to inform the public about civil rights litigation.",
"### Licensing Information\n\n\nThe Multi-LexSum dataset is distributed under the Open Data Commons Attribution License (ODC-By).\nThe case summaries and metadata are licensed under the Creative Commons Attribution License (CC BY-NC), and the source documents are already in the public domain.\nCommercial users who desire a license for summaries and metadata can contact info@URL, which will allow free use but limit summary re-posting.\nThe corresponding code for downloading and loading the dataset is licensed under the Apache License 2.0.\n\n\nRelease History\n---------------"
] | [
97,
17,
182,
9,
39,
50,
69,
125
] | [
"passage: TAGS\n#task_categories-summarization #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #size_categories-10K<n<100K #source_datasets-original #language-English #license-odc-by #arxiv-2206.10883 #region-us \n### Talk @ NeurIPS 2022\n\n\n. It presents a challenging multi-document summarization task given the long length of the source documents, often exceeding two hundred pages per case. Unlike other summarization datasets that are (semi-)automatically curated, Multi-LexSum consists of expert-authored summaries: the experts—lawyers and law students—are trained to follow carefully created guidelines, and their work is reviewed by an additional expert to ensure quality.### Languages\n\n\nEnglish\n\n\nDataset\n-------### Data Fields\n\n\nThe dataset contains a list of instances (cases); each instance contains the following data:\n\n\n\nPlease check the exemplar usage below for loading the data:### Data Splits\n\n\n\nDataset Sheet (Datasheet)\n-------------------------\n\n\nPlease check our dataset sheet for details regarding dataset creation, source data, annotation, and considerations for the usage.\n\n\nAdditional Information\n----------------------### Dataset Curators\n\n\nThe dataset is created by the collaboration between Civil Rights Litigation Clearinghouse (CRLC, from University of Michigan) and Allen Institute for AI. Multi-LexSum builds on the dataset used and posted by the Clearinghouse to inform the public about civil rights litigation."
] |
f5e8e0a268c18fa828f2ba41ea459bfeb8ceb12e |
# Dataset Card for filtered_cuad
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Contract Understanding Atticus Dataset](https://www.atticusprojectai.org/cuad)
- **Repository:** [Contract Understanding Atticus Dataset](https://github.com/TheAtticusProject/cuad/)
- **Paper:** [CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review](https://arxiv.org/abs/2103.06268)
- **Point of Contact:** [Atticus Project Team]([email protected])
### Dataset Summary
Contract Understanding Atticus Dataset (CUAD) v1 is a corpus of more than 13,000 labels in 510 commercial legal contracts that have been manually labeled to identify 41 categories of important clauses that lawyers look for when reviewing contracts in connection with corporate transactions. This dataset is a filtered version of CUAD. It excludes legal contracts with an Agreement date prior to 2002 and contracts which are not business-to-business. From the 41 categories, we kept the 12 we considered most crucial.

We wanted a small dataset for quickly fine-tuning different models without sacrificing the categories we deemed important. Most questions had to be removed because they have no answer, which is problematic since unanswerable questions can skew the resulting metrics, such as the F1 score and the AUPR curve.
CUAD is curated and maintained by The Atticus Project, Inc. to support NLP research and development in legal contract review. Analysis of CUAD can be found at https://arxiv.org/abs/2103.06268. Code for replicating the results and the trained model can be found at https://github.com/TheAtticusProject/cuad.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset contains samples in English only.
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
    "answers": {
        "answer_start": [44],
        "text": ['DISTRIBUTOR AGREEMENT']
    },
    "context": 'EXHIBIT 10.6\n\n DISTRIBUTOR AGREEMENT\n\n THIS DISTRIBUTOR AGREEMENT (the "Agreement") is made by and between Electric City Corp., a Delaware corporation ("Company") and Electric City of Illinois LLC ("Distributor") this 7th day of September, 1999...',
    "id": "LIMEENERGYCO_09_09_1999-EX-10-DISTRIBUTOR AGREEMENT__Document Name_0",
    "question": 'Highlight the parts (if any) of this contract related to "Document Name" that should be reviewed by a lawyer. Details: The name of the contract',
    "title": "LIMEENERGYCO_09_09_1999-EX-10-DISTRIBUTOR AGREEMENT"
}
```
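To load and inspect such instances, a minimal sketch (the repository id is taken from this card):
```python
from datasets import load_dataset

dataset = load_dataset("alex-apostolo/filtered-cuad")

sample = dataset["train"][0]
print(sample["title"])
print(sample["question"])
print(sample["answers"]["text"], sample["answers"]["answer_start"])
```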
### Data Fields
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
  - `answer_start`: an `int32` feature.
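As in SQuAD-style extractive QA, each `answer_start` is a character offset into `context`. A self-contained sketch of that invariant (the whitespace padding below is hypothetical, chosen so the offset of 44 from the cropped example above lines up):
```python
def check_alignment(example):
    """Assert each answer text matches the context slice it points to."""
    for text, start in zip(example["answers"]["text"],
                           example["answers"]["answer_start"]):
        assert example["context"][start:start + len(text)] == text

# Hypothetical padding so "DISTRIBUTOR AGREEMENT" starts at offset 44.
example = {
    "context": "EXHIBIT 10.6\n\n" + " " * 30 + "DISTRIBUTOR AGREEMENT\n\n...",
    "answers": {"text": ["DISTRIBUTOR AGREEMENT"], "answer_start": [44]},
}
check_alignment(example)
```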
### Data Splits
This dataset is split into train and test sets. The number of samples in each set is given below:
| | Train | Test |
| ----- | ------ | ---- |
| CUAD | 5442 | 936 |
## Dataset Creation
### Curation Rationale
A highly valuable specialized task without a public large-scale dataset is contract review, which costs humans substantial time, money, and attention. Many law firms spend approximately 50% of their time reviewing contracts (CEB, 2017). Due to the specialized training necessary to understand and interpret contracts, the billing rates for lawyers at large law firms are typically around $500-$900 per hour in the US. As a result, many transactions cost companies hundreds of thousands of dollars just so that lawyers can verify that there are no problematic obligations or requirements included in the contracts. Contract review can be a source of drudgery and, in comparison to other legal tasks, is widely considered to be especially boring.
Contract review costs also affect consumers. Since contract review costs are so prohibitive, contract review is not often performed outside corporate transactions. Small companies and individuals consequently often sign contracts without even reading them, which can result in predatory behavior that harms consumers. Automating contract review by openly releasing high-quality data and fine-tuned models can increase access to legal support for small businesses and individuals, so that legal support is not exclusively available to wealthy companies.
To reduce the disparate societal costs of contract review, and to study how well NLP models generalize to specialized domains, the authors introduced a new large-scale dataset for contract review. As part of The Atticus Project, a non-profit organization of legal experts, they introduce CUAD, the Contract Understanding Atticus Dataset. This dataset was created with a year-long effort pushed forward by dozens of law student annotators, lawyers, and machine learning researchers. The dataset includes more than 500 contracts and more than 13,000 expert annotations that span 41 label categories. For each of the 41 labels, models must learn to highlight the portions of a contract most salient to that label. This makes the task a matter of finding needles in a haystack.
### Source Data
#### Initial Data Collection and Normalization
The CUAD includes commercial contracts selected from 25 different types of contracts based on the contract names as shown below. Within each type, the creators randomly selected contracts based on the names of the filing companies across the alphabet.
Type of Contracts: # of Docs
Affiliate Agreement: 8
Agency Agreement: 8
Collaboration/Cooperation Agreement: 26
Co-Branding Agreement: 6
Consulting Agreement: 11
Development Agreement: 28
Distributor Agreement: 23
Endorsement Agreement: 10
Franchise Agreement: 14
Hosting Agreement: 12
IP Agreement: 16
Joint Venture Agreement: 22
License Agreement: 32
Maintenance Agreement: 24
Manufacturing Agreement: 6
Marketing Agreement: 16
Non-Compete/No-Solicit/Non-Disparagement Agreement: 3
Outsourcing Agreement: 12
Promotion Agreement: 9
Reseller Agreement: 12
Service Agreement: 24
Sponsorship Agreement: 17
Supply Agreement: 13
Strategic Alliance Agreement: 32
Transportation Agreement: 1
TOTAL: 385
Categories
Document Name
Parties
Agreement Date
Effective Date
Expiration Date
Renewal Term
Notice Period To Terminate Renewal
Governing Law
Non-Compete
Exclusivity
Change Of Control
Anti-Assignment
#### Who are the source language producers?
The contracts were sourced from EDGAR, the Electronic Data Gathering, Analysis, and Retrieval system used at the U.S. Securities and Exchange Commission (SEC). Publicly traded companies in the United States are required to file certain contracts under the SEC rules. Access to these contracts is available to the public for free at https://www.sec.gov/edgar. Please read the Datasheet at https://www.atticusprojectai.org/ for information on the intended use and limitations of the CUAD.
### Annotations
#### Annotation process
The labeling process included multiple steps to ensure accuracy:
1. Law Student Training: law students attended training sessions on each of the categories that included a summary, video instructions by experienced attorneys, multiple quizzes and workshops. Students were then required to label sample contracts in eBrevia, an online contract review tool. The initial training took approximately 70-100 hours.
2. Law Student Label: law students conducted manual contract review and labeling in eBrevia.
3. Key Word Search: law students conducted keyword searches in eBrevia to capture additional categories that had been missed during the “Student Label” step.
4. Category-by-Category Report Review: law students exported the labeled clauses into reports, reviewed each clause category-by-category, and highlighted clauses that they believed were mislabeled.
5. Attorney Review: experienced attorneys reviewed the category-by-category report with students comments, provided comments and addressed student questions. When applicable, attorneys discussed such results with the students and reached consensus. Students made changes in eBrevia accordingly.
6. eBrevia Extras Review: Attorneys and students used eBrevia to generate a list of “extras”, which are clauses that the eBrevia AI tool identified as responsive to a category but that were not labeled by human annotators. Attorneys and students reviewed all of the “extras” and added the correct ones. The process was repeated until all or substantially all of the “extras” were incorrect labels.
7. Final Report: The final report was exported into a CSV file. Volunteers manually added the “Yes/No” answer column to categories that do not contain an answer.
#### Who are the annotators?
Answered in the section above.
### Personal and Sensitive Information
Some clauses in the files are redacted because the party submitting these contracts redacted them to protect confidentiality. Such redaction may show up as asterisks (\*\*\*), underscores (\_\_\_), or blank spaces. The dataset and the answers reflect such redactions. For example, the answer for “January \_\_ 2020” would be “1/[]/2020”.
For any categories that require an answer of “Yes/No”, annotators include full sentences as text context in a contract. To maintain consistency and minimize inter-annotator disagreement, annotators select text for the full sentence, under the instruction of “from period to period”.
For the other categories, annotators selected segments of the text in the contract that are responsive to each such category. One category in a contract may include multiple labels. For example, “Parties” may include 4-10 separate text strings that are not continuous in a contract. The answer is presented in the unified format separated by semicolons of “Party A Inc. (“Party A”); Party B Corp. (“Party B”)”.
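A small sketch for splitting such multi-part answers back into individual party strings (the example string is hypothetical, following the unified format just described):
```python
# Hypothetical answer in the unified semicolon-separated format.
parties = 'Party A Inc. ("Party A"); Party B Corp. ("Party B")'
print([p.strip() for p in parties.split(";")])
# -> ['Party A Inc. ("Party A")', 'Party B Corp. ("Party B")']
```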
Some sentences in the files include confidential legends that are not part of the contracts. An example of such confidential legend is as follows:
THIS EXHIBIT HAS BEEN REDACTED AND IS THE SUBJECT OF A CONFIDENTIAL TREATMENT REQUEST. REDACTED MATERIAL IS MARKED WITH [* * *] AND HAS BEEN FILED SEPARATELY WITH THE SECURITIES AND EXCHANGE COMMISSION.
Some sentences in the files contain irrelevant information such as footers or page numbers. Some sentences may not be relevant to the corresponding category. Some sentences may correspond to a different category. Because many legal clauses are very long and contain various sub-parts, sometimes only a sub-part of a sentence is responsive to a category.
To address the foregoing limitations, annotators manually deleted the portion that is not responsive, replacing it with the symbol "<omitted>" to indicate that the two text segments do not appear immediately next to each other in the contracts. For example, if a “Termination for Convenience” clause starts with “Each Party may terminate this Agreement if” followed by three subparts “(a), (b) and (c)”, but only subpart (c) is responsive to this category, the authors manually deleted subparts (a) and (b) and replaced them with the symbol "<omitted>”. Another example is for “Effective Date”, the contract includes a sentence “This Agreement is effective as of the date written above” that appears after the date “January 1, 2010”. The annotation is as follows: “January 1, 2010 <omitted> This Agreement is effective as of the date written above.”
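Likewise, the non-contiguous segments can be recovered by splitting on the "<omitted>" marker; a sketch using the “Effective Date” annotation above:
```python
annotation = ("January 1, 2010 <omitted> "
              "This Agreement is effective as of the date written above.")
segments = [seg.strip() for seg in annotation.split("<omitted>")]
print(segments)
# -> ['January 1, 2010', 'This Agreement is effective as of the date written above.']
```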
Because the contracts were converted from PDF into TXT files, the converted TXT files may not stay true to the format of the original PDF files. For example, some contracts contain inconsistent spacing between words, sentences and paragraphs. Table format is not maintained in the TXT files.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Attorney Advisors
Wei Chen, John Brockland, Kevin Chen, Jacky Fink, Spencer P. Goodson, Justin Haan, Alex Haskell, Kari Krusmark, Jenny Lin, Jonas Marson, Benjamin Petersen, Alexander Kwonji Rosenberg, William R. Sawyers, Brittany Schmeltz, Max Scott, Zhu Zhu
Law Student Leaders
John Batoha, Daisy Beckner, Lovina Consunji, Gina Diaz, Chris Gronseth, Calvin Hannagan, Joseph Kroon, Sheetal Sharma Saran
Law Student Contributors
Scott Aronin, Bryan Burgoon, Jigar Desai, Imani Haynes, Jeongsoo Kim, Margaret Lynch, Allison Melville, Felix Mendez-Burgos, Nicole Mirkazemi, David Myers, Emily Rissberger, Behrang Seraj, Sarahginy Valcin
Technical Advisors & Contributors
Dan Hendrycks, Collin Burns, Spencer Ball, Anya Chen
### Licensing Information
CUAD is licensed under the Creative Commons Attribution 4.0 (CC BY 4.0) license and free to the public for commercial and non-commercial use.
The creators make no representations or warranties regarding the license status of the underlying contracts, which are publicly available and downloadable from EDGAR.
Privacy Policy & Disclaimers
The categories and contracts included in the dataset are not comprehensive or representative. The authors encourage the public to help improve them by sending comments and suggestions to [email protected]. Comments and suggestions will be reviewed by The Atticus Project at its discretion and will be included in future versions of Atticus categories once approved.
The use of CUAD is subject to their privacy policy https://www.atticusprojectai.org/privacy-policy and disclaimer https://www.atticusprojectai.org/disclaimer.
### Citation Information
```
@article{hendrycks2021cuad,
title={CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review},
author={Dan Hendrycks and Collin Burns and Anya Chen and Spencer Ball},
journal={arXiv preprint arXiv:2103.06268},
year={2021}
}
```
### Contributions
Thanks to [@bhavitvyamalik](https://github.com/bhavitvyamalik) for adding this dataset. | alex-apostolo/filtered-cuad | [
"task_categories:question-answering",
"task_ids:closed-domain-qa",
"task_ids:extractive-qa",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:cuad",
"language:en",
"license:cc-by-4.0",
"arxiv:2103.06268",
"region:us"
] | 2022-08-03T14:59:24+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["cuad"], "task_categories": ["question-answering"], "task_ids": ["closed-domain-qa", "extractive-qa"], "paperswithcode_id": "cuad", "pretty_name": "CUAD", "train-eval-index": [{"config": "default", "task": "question-answering", "task_id": "extractive_question_answering", "splits": {"train_split": "train", "eval_split": "test"}, "col_mapping": {"question": "question", "context": "context", "answers": {"text": "text", "answer_start": "answer_start"}}, "metrics": [{"type": "cuad", "name": "CUAD"}]}]} | 2022-08-04T05:24:04+00:00 | [
"2103.06268"
] | [
"en"
] | TAGS
#task_categories-question-answering #task_ids-closed-domain-qa #task_ids-extractive-qa #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-cuad #language-English #license-cc-by-4.0 #arxiv-2103.06268 #region-us
| Dataset Card for filtered\_cuad
===============================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Homepage: Contract Understanding Atticus Dataset
* Repository: Contract Understanding Atticus Dataset
* Paper: CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review
* Point of Contact: Atticus Project Team
### Dataset Summary
Contract Understanding Atticus Dataset (CUAD) v1 is a corpus of more than 13,000 labels in 510 commercial legal contracts that have been manually labeled to identify 41 categories of important clauses that lawyers look for when reviewing contracts in connection with corporate transactions. This dataset is a filtered version of CUAD. It excludes legal contracts with an Agreement date prior to 2002 and contracts that are not business-to-business. From the original 41 categories, we kept the 12 we considered most crucial.
We wanted a small dataset for quickly fine-tuning different models without sacrificing the categories we deemed important. Most questions were removed because they had no answer, which is problematic since unanswerable questions can skew the resulting metrics such as the F1 score and the AUPR curve.
CUAD is curated and maintained by The Atticus Project, Inc. to support NLP research and development in legal contract review. Analysis of CUAD can be found at URL. Code for replicating the results and the trained model can be found at URL.
### Supported Tasks and Leaderboards
### Languages
The dataset contains samples in English only.
Dataset Structure
-----------------
### Data Instances
An example of 'train' looks as follows.
### Data Fields
* 'id': a 'string' feature.
* 'title': a 'string' feature.
* 'context': a 'string' feature.
* 'question': a 'string' feature.
* 'answers': a dictionary feature containing:
+ 'text': a 'string' feature.
	+ 'answer\_start': an 'int32' feature.
### Data Splits
This dataset is split into train and test sets. The number of samples in each set is given below:
Train: 5442, Test: 936
Dataset Creation
----------------
### Curation Rationale
A highly valuable specialized task without a public large-scale dataset is contract review, which costs humans substantial time, money, and attention. Many law firms spend approximately 50% of their time reviewing contracts (CEB, 2017). Due to the specialized training necessary to understand and interpret contracts, the billing rates for lawyers at large law firms are typically around $500-$900 per hour in the US. As a result, many transactions cost companies hundreds of thousands of dollars just so that lawyers can verify that there are no problematic obligations or requirements included in the contracts. Contract review can be a source of drudgery and, in comparison to other legal tasks, is widely considered to be especially boring.
Contract review costs also affect consumers. Since contract review costs are so prohibitive, contract review is not often performed outside corporate transactions. Small companies and individuals consequently often sign contracts without even reading them, which can result in predatory behavior that harms consumers. Automating contract review by openly releasing high-quality data and fine-tuned models can increase access to legal support for small businesses and individuals, so that legal support is not exclusively available to wealthy companies.
To reduce the disparate societal costs of contract review, and to study how well NLP models generalize to specialized domains, the authors introduced a new large-scale dataset for contract review. As part of The Atticus Project, a non-profit organization of legal experts, they introduce CUAD, the Contract Understanding Atticus Dataset. This dataset was created with a year-long effort pushed forward by dozens of law student annotators, lawyers, and machine learning researchers. The dataset includes more than 500 contracts and more than 13,000 expert annotations that span 41 label categories. For each of the 41 labels, models must learn to highlight the portions of a contract most salient to that label. This makes the task a matter of finding needles in a haystack.
### Source Data
#### Initial Data Collection and Normalization
The CUAD includes commercial contracts selected from 25 different types of contracts based on the contract names as shown below. Within each type, the creators randomly selected contracts based on the names of the filing companies across the alphabet.
Type of Contracts: # of Docs
```
Affiliate Agreement: 8
Agency Agreement: 8
Collaboration/Cooperation Agreement: 26
Co-Branding Agreement: 6
Consulting Agreement: 11
Development Agreement: 28
Distributor Agreement: 23
Endorsement Agreement: 10
Franchise Agreement: 14
Hosting Agreement: 12
IP Agreement: 16
Joint Venture Agreement: 22
License Agreement: 32
Maintenance Agreement: 24
Manufacturing Agreement: 6
Marketing Agreement: 16
Non-Compete/No-Solicit/Non-Disparagement Agreement: 3
Outsourcing Agreement: 12
Promotion Agreement: 9
Reseller Agreement: 12
Service Agreement: 24
Sponsorship Agreement: 17
Supply Agreement: 13
Strategic Alliance Agreement: 32
Transportation Agreement: 1
TOTAL: 385
```
Categories
```
Document Name
Parties
Agreement Date
Effective Date
Expiration Date
Renewal Term
Notice Period To Terminate Renewal
Governing Law
Non-Compete
Exclusivity
Change Of Control
Anti-Assignment
```
#### Who are the source language producers?
The contracts were sourced from EDGAR, the Electronic Data Gathering, Analysis, and Retrieval system used at the U.S. Securities and Exchange Commission (SEC). Publicly traded companies in the United States are required to file certain contracts under the SEC rules. Access to these contracts is available to the public for free at URL Please read the Datasheet at URL for information on the intended use and limitations of the CUAD.
### Annotations
#### Annotation process
The labeling process included multiple steps to ensure accuracy:
1. Law Student Training: law students attended training sessions on each of the categories that included a summary, video instructions by experienced attorneys, multiple quizzes and workshops. Students were then required to label sample contracts in eBrevia, an online contract review tool. The initial training took approximately 70-100 hours.
2. Law Student Label: law students conducted manual contract review and labeling in eBrevia.
3. Key Word Search: law students conducted keyword searches in eBrevia to capture additional categories that had been missed during the “Student Label” step.
4. Category-by-Category Report Review: law students exported the labeled clauses into reports, reviewed each clause category-by-category, and highlighted clauses that they believed were mislabeled.
5. Attorney Review: experienced attorneys reviewed the category-by-category report with students comments, provided comments and addressed student questions. When applicable, attorneys discussed such results with the students and reached consensus. Students made changes in eBrevia accordingly.
6. eBrevia Extras Review: Attorneys and students used eBrevia to generate a list of “extras”, which are clauses that the eBrevia AI tool identified as responsive to a category but that were not labeled by human annotators. Attorneys and students reviewed all of the “extras” and added the correct ones. The process was repeated until all or substantially all of the “extras” were incorrect labels.
7. Final Report: The final report was exported into a CSV file. Volunteers manually added the “Yes/No” answer column to categories that do not contain an answer.
#### Who are the annotators?
Answered in the section above.
### Personal and Sensitive Information
Some clauses in the files are redacted because the party submitting these contracts redacted them to protect confidentiality. Such redaction may show up as asterisks (\*\*\*), underscores (\_\_\_), or blank spaces. The dataset and the answers reflect such redactions. For example, the answer for “January \_\_ 2020” would be “1/[]/2020”.
For any categories that require an answer of “Yes/No”, annotators include full sentences as text context in a contract. To maintain consistency and minimize inter-annotator disagreement, annotators select text for the full sentence, under the instruction of “from period to period”.
For the other categories, annotators selected segments of the text in the contract that are responsive to each such category. One category in a contract may include multiple labels. For example, “Parties” may include 4-10 separate text strings that are not continuous in a contract. The answer is presented in the unified format separated by semicolons of “Party A Inc. (“Party A”); Party B Corp. (“Party B”)”.
Some sentences in the files include confidential legends that are not part of the contracts. An example of such confidential legend is as follows:
THIS EXHIBIT HAS BEEN REDACTED AND IS THE SUBJECT OF A CONFIDENTIAL TREATMENT REQUEST. REDACTED MATERIAL IS MARKED WITH [\* \* \*] AND HAS BEEN FILED SEPARATELY WITH THE SECURITIES AND EXCHANGE COMMISSION.
Some sentences in the files contain irrelevant information such as footers or page numbers. Some sentences may not be relevant to the corresponding category. Some sentences may correspond to a different category. Because many legal clauses are very long and contain various sub-parts, sometimes only a sub-part of a sentence is responsive to a category.
To address the foregoing limitations, annotators manually deleted the portion that is not responsive, replacing it with the symbol "<omitted>" to indicate that the two text segments do not appear immediately next to each other in the contracts. For example, if a “Termination for Convenience” clause starts with “Each Party may terminate this Agreement if” followed by three subparts “(a), (b) and (c)”, but only subpart (c) is responsive to this category, the authors manually deleted subparts (a) and (b) and replaced them with the symbol "<omitted>”. Another example is for “Effective Date”, the contract includes a sentence “This Agreement is effective as of the date written above” that appears after the date “January 1, 2010”. The annotation is as follows: “January 1, 2010 <omitted> This Agreement is effective as of the date written above.”
Because the contracts were converted from PDF into TXT files, the converted TXT files may not stay true to the format of the original PDF files. For example, some contracts contain inconsistent spacing between words, sentences and paragraphs. Table format is not maintained in the TXT files.
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
Attorney Advisors
Wei Chen, John Brockland, Kevin Chen, Jacky Fink, Spencer P. Goodson, Justin Haan, Alex Haskell, Kari Krusmark, Jenny Lin, Jonas Marson, Benjamin Petersen, Alexander Kwonji Rosenberg, William R. Sawyers, Brittany Schmeltz, Max Scott, Zhu Zhu
Law Student Leaders
John Batoha, Daisy Beckner, Lovina Consunji, Gina Diaz, Chris Gronseth, Calvin Hannagan, Joseph Kroon, Sheetal Sharma Saran
Law Student Contributors
Scott Aronin, Bryan Burgoon, Jigar Desai, Imani Haynes, Jeongsoo Kim, Margaret Lynch, Allison Melville, Felix Mendez-Burgos, Nicole Mirkazemi, David Myers, Emily Rissberger, Behrang Seraj, Sarahginy Valcin
Technical Advisors & Contributors
Dan Hendrycks, Collin Burns, Spencer Ball, Anya Chen
### Licensing Information
CUAD is licensed under the Creative Commons Attribution 4.0 (CC BY 4.0) license and free to the public for commercial and non-commercial use.
The creators make no representations or warranties regarding the license status of the underlying contracts, which are publicly available and downloadable from EDGAR.
Privacy Policy & Disclaimers
The categories and contracts included in the dataset are not comprehensive or representative. The authors encourage the public to help improve them by sending comments and suggestions to info@URL. Comments and suggestions will be reviewed by The Atticus Project at its discretion and will be included in future versions of Atticus categories once approved.
The use of CUAD is subject to their privacy policy URL and disclaimer URL
### Contributions
Thanks to @bhavitvyamalik for adding this dataset.
| [
"### Dataset Summary\n\n\nContract Understanding Atticus Dataset (CUAD) v1 is a corpus of more than 13,000 labels in 510 commercial legal contracts that have been manually labeled to identify 41 categories of important clauses that lawyers look for when reviewing contracts in connection with corporate transactions. This dataset is a filtered version of CUAD. It excludes legal contracts with an Agreement date prior to 2002 and contracts which are not Business to Business. From the 41 categories we filtered them down to 12 which we considered the most crucial.\n\n\nWe wanted a small dataset to quickly fine-tune different models without sacrificing the categories which we deemed as important. The need to remove most questions was due to them not having an answer which is problematic since it can scue the resulting metrics such as the F1 score and the AUPR curve.\n\n\nCUAD is curated and maintained by The Atticus Project, Inc. to support NLP research and development in legal contract review. Analysis of CUAD can be found at URL Code for replicating the results and the trained model can be found at URL",
"### Supported Tasks and Leaderboards",
"### Languages\n\n\nThe dataset contains samples in English only.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nAn example of 'train' looks as follows.",
"### Data Fields\n\n\n* 'id': a 'string' feature.\n* 'title': a 'string' feature.\n* 'context': a 'string' feature.\n* 'question': a 'string' feature.\n* 'answers': a dictionary feature containing:\n\t+ 'text': a 'string' feature.\n\t+ 'answer\\_start': a 'int32' feature.",
"### Data Splits\n\n\nThis dataset is split into train/test set. Number of samples in each set is given below:\n\n\nTrain: CUAD, Test: 5442\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nA highly valuable specialized task without a public large-scale dataset is contract review, which costs humans substantial time, money, and attention. Many law firms spend approximately 50% of their time reviewing contracts (CEB, 2017). Due to the specialized training necessary to understand and interpret contracts, the billing rates for lawyers at large law firms are typically around $500-$900 per hour in the US. As a result, many transactions cost companies hundreds of thousands of dollars just so that lawyers can verify that there are no problematic obligations or requirements included in the contracts. Contract review can be a source of drudgery and, in comparison to other legal tasks, is widely considered to be especially boring.\n\n\nContract review costs also affect consumers. Since contract review costs are so prohibitive, contract review is not often performed outside corporate transactions. Small companies and individuals consequently often sign contracts without even reading them, which can result in predatory behavior that harms consumers. Automating contract review by openly releasing high-quality data and fine-tuned models can increase access to legal support for small businesses and individuals, so that legal support is not exclusively available to wealthy companies.\n\n\nTo reduce the disparate societal costs of contract review, and to study how well NLP models generalize to specialized domains, the authors introduced a new large-scale dataset for contract review. As part of The Atticus Project, a non-profit organization of legal experts, CUAD is introduced, the Contract Understanding Atticus Dataset. This dataset was created with a year-long effort pushed forward by dozens of law student annotators, lawyers, and machine learning researchers. The dataset includes more than 500 contracts and more than 13,000 expert annotations that span 41 label categories. For each of 41 different labels, models must learn to highlight the portions of a contract most salient to that label. This makes the task a matter of finding needles in a haystack.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nThe CUAD includes commercial contracts selected from 25 different types of contracts based on the contract names as shown below. Within each type, the creators randomly selected contracts based on the names of the filing companies across the alphabet.\n\n\nType of Contracts: # of Docs\n\n\n\n```\nAffiliate Agreement:\t\t8\nAgency Agreement:\t\t 8 \nCollaboration/Cooperation Agreement: 26\nCo-Branding Agreement:\t\t6\nConsulting Agreement:\t\t11\nDevelopment Agreement:\t\t28\nDistributor Agreement:\t\t23\nEndorsement Agreement:\t\t10\nFranchise Agreement:\t\t14\nHosting Agreement:\t\t12\nIP Agreement:\t\t\t16\nJoint Venture Agreemen:\t\t22\nLicense Agreement:\t\t32\nMaintenance Agreement:\t\t24\nManufacturing Agreement:\t6\nMarketing Agreement:\t\t16\nNon-Compete/No-Solicit/Non-Disparagement Agreement: 3\nOutsourcing Agreement:\t\t12\nPromotion Agreement:\t\t9\nReseller Agreement:\t\t12\nService Agreement:\t\t24\nSponsorship Agreement:\t17\nSupply Agreement:\t\t13\nStrategic Alliance Agreement:\t32\nTransportation Agreement:\t1\nTOTAL:\t\t\t\t385\n\n```\n\nCategories\n\n\n\n```\nDocument Name\nParties\nAgreement Date\nEffective Date\nExpiration Date\nRenewal Term\nNotice Period To Terminate Renewal\nGoverning Law\nNon-Compete\nExclusivity\nChange Of Control\nAnti-Assignment\n\n```",
"#### Who are the source language producers?\n\n\nThe contracts were sourced from EDGAR, the Electronic Data Gathering, Analysis, and Retrieval system used at the U.S. Securities and Exchange Commission (SEC). Publicly traded companies in the United States are required to file certain contracts under the SEC rules. Access to these contracts is available to the public for free at URL Please read the Datasheet at URL for information on the intended use and limitations of the CUAD.",
"### Annotations",
"#### Annotation process\n\n\nThe labeling process included multiple steps to ensure accuracy:\n\n\n1. Law Student Training: law students attended training sessions on each of the categories that included a summary, video instructions by experienced attorneys, multiple quizzes and workshops. Students were then required to label sample contracts in eBrevia, an online contract review tool. The initial training took approximately 70-100 hours.\n2. Law Student Label: law students conducted manual contract review and labeling in eBrevia.\n3. Key Word Search: law students conducted keyword search in eBrevia to capture additional categories that have been missed during the “Student Label” step.\n4. Category-by-Category Report Review: law students exported the labeled clauses into reports, review each clause category-by-category and highlight clauses that they believe are mislabeled.\n5. Attorney Review: experienced attorneys reviewed the category-by-category report with students comments, provided comments and addressed student questions. When applicable, attorneys discussed such results with the students and reached consensus. Students made changes in eBrevia accordingly.\n6. eBrevia Extras Review. Attorneys and students used eBrevia to generate a list of “extras”, which are clauses that eBrevia AI tool identified as responsive to a category but not labeled by human annotators. Attorneys and students reviewed all of the “extras” and added the correct ones. The process is repeated until all or substantially all of the “extras” are incorrect labels.\n7. Final Report: The final report was exported into a CSV file. Volunteers manually added the “Yes/No” answer column to categories that do not contain an answer.",
"#### Who are the annotators?\n\n\nAnswered in above section.",
"### Personal and Sensitive Information\n\n\nSome clauses in the files are redacted because the party submitting these contracts redacted them to protect confidentiality. Such redaction may show up as asterisks (\\*\\*\\*) or underscores (\\_\\_\\_) or blank spaces. The dataset and the answers reflect such redactions. For example, the answer for “January \\_\\_ 2020” would be “1/[]/2020”).\n\n\nFor any categories that require an answer of “Yes/No”, annotators include full sentences as text context in a contract. To maintain consistency and minimize inter-annotator disagreement, annotators select text for the full sentence, under the instruction of “from period to period”.\n\n\nFor the other categories, annotators selected segments of the text in the contract that are responsive to each such category. One category in a contract may include multiple labels. For example, “Parties” may include 4-10 separate text strings that are not continuous in a contract. The answer is presented in the unified format separated by semicolons of “Party A Inc. (“Party A”); Party B Corp. (“Party B”)”.\n\n\nSome sentences in the files include confidential legends that are not part of the contracts. An example of such confidential legend is as follows:\n\n\nTHIS EXHIBIT HAS BEEN REDACTED AND IS THE SUBJECT OF A CONFIDENTIAL TREATMENT REQUEST. REDACTED MATERIAL IS MARKED WITH [\\* \\* \\*] AND HAS BEEN FILED SEPARATELY WITH THE SECURITIES AND EXCHANGE COMMISSION.\n\n\nSome sentences in the files contain irrelevant information such as footers or page numbers. Some sentences may not be relevant to the corresponding category. Some sentences may correspond to a different category. Because many legal clauses are very long and contain various sub-parts, sometimes only a sub-part of a sentence is responsive to a category.\n\n\nTo address the foregoing limitations, annotators manually deleted the portion that is not responsive, replacing it with the symbol \"\" to indicate that the two text segments do not appear immediately next to each other in the contracts. For example, if a “Termination for Convenience” clause starts with “Each Party may terminate this Agreement if” followed by three subparts “(a), (b) and (c)”, but only subpart (c) is responsive to this category, the authors manually deleted subparts (a) and (b) and replaced them with the symbol \"”. Another example is for “Effective Date”, the contract includes a sentence “This Agreement is effective as of the date written above” that appears after the date “January 1, 2010”. The annotation is as follows: “January 1, 2010 This Agreement is effective as of the date written above.”\n\n\nBecause the contracts were converted from PDF into TXT files, the converted TXT files may not stay true to the format of the original PDF files. For example, some contracts contain inconsistent spacing between words, sentences and paragraphs. Table format is not maintained in the TXT files.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nAttorney Advisors\nWei Chen, John Brockland, Kevin Chen, Jacky Fink, Spencer P. Goodson, Justin Haan, Alex Haskell, Kari Krusmark, Jenny Lin, Jonas Marson, Benjamin Petersen, Alexander Kwonji Rosenberg, William R. Sawyers, Brittany Schmeltz, Max Scott, Zhu Zhu\n\n\nLaw Student Leaders\nJohn Batoha, Daisy Beckner, Lovina Consunji, Gina Diaz, Chris Gronseth, Calvin Hannagan, Joseph Kroon, Sheetal Sharma Saran\n\n\nLaw Student Contributors\nScott Aronin, Bryan Burgoon, Jigar Desai, Imani Haynes, Jeongsoo Kim, Margaret Lynch, Allison Melville, Felix Mendez-Burgos, Nicole Mirkazemi, David Myers, Emily Rissberger, Behrang Seraj, Sarahginy Valcin\n\n\nTechnical Advisors & Contributors\nDan Hendrycks, Collin Burns, Spencer Ball, Anya Chen",
"### Licensing Information\n\n\nCUAD is licensed under the Creative Commons Attribution 4.0 (CC BY 4.0) license and free to the public for commercial and non-commercial use.\n\n\nThe creators make no representations or warranties regarding the license status of the underlying contracts, which are publicly available and downloadable from EDGAR.\nPrivacy Policy & Disclaimers\n\n\nThe categories or the contracts included in the dataset are not comprehensive or representative. The authors encourage the public to help improve them by sending them your comments and suggestions to info@URL. Comments and suggestions will be reviewed by The Atticus Project at its discretion and will be included in future versions of Atticus categories once approved.\n\n\nThe use of CUAD is subject to their privacy policy URL and disclaimer URL",
"### Contributions\n\n\nThanks to @bhavitvyamalik for adding this dataset."
] | [
"TAGS\n#task_categories-question-answering #task_ids-closed-domain-qa #task_ids-extractive-qa #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-cuad #language-English #license-cc-by-4.0 #arxiv-2103.06268 #region-us \n",
"### Dataset Summary\n\n\nContract Understanding Atticus Dataset (CUAD) v1 is a corpus of more than 13,000 labels in 510 commercial legal contracts that have been manually labeled to identify 41 categories of important clauses that lawyers look for when reviewing contracts in connection with corporate transactions. This dataset is a filtered version of CUAD. It excludes legal contracts with an Agreement date prior to 2002 and contracts which are not Business to Business. From the 41 categories we filtered them down to 12 which we considered the most crucial.\n\n\nWe wanted a small dataset to quickly fine-tune different models without sacrificing the categories which we deemed as important. The need to remove most questions was due to them not having an answer which is problematic since it can scue the resulting metrics such as the F1 score and the AUPR curve.\n\n\nCUAD is curated and maintained by The Atticus Project, Inc. to support NLP research and development in legal contract review. Analysis of CUAD can be found at URL Code for replicating the results and the trained model can be found at URL",
"### Supported Tasks and Leaderboards",
"### Languages\n\n\nThe dataset contains samples in English only.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nAn example of 'train' looks as follows.",
"### Data Fields\n\n\n* 'id': a 'string' feature.\n* 'title': a 'string' feature.\n* 'context': a 'string' feature.\n* 'question': a 'string' feature.\n* 'answers': a dictionary feature containing:\n\t+ 'text': a 'string' feature.\n\t+ 'answer\\_start': a 'int32' feature.",
"### Data Splits\n\n\nThis dataset is split into train/test set. Number of samples in each set is given below:\n\n\nTrain: CUAD, Test: 5442\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nA highly valuable specialized task without a public large-scale dataset is contract review, which costs humans substantial time, money, and attention. Many law firms spend approximately 50% of their time reviewing contracts (CEB, 2017). Due to the specialized training necessary to understand and interpret contracts, the billing rates for lawyers at large law firms are typically around $500-$900 per hour in the US. As a result, many transactions cost companies hundreds of thousands of dollars just so that lawyers can verify that there are no problematic obligations or requirements included in the contracts. Contract review can be a source of drudgery and, in comparison to other legal tasks, is widely considered to be especially boring.\n\n\nContract review costs also affect consumers. Since contract review costs are so prohibitive, contract review is not often performed outside corporate transactions. Small companies and individuals consequently often sign contracts without even reading them, which can result in predatory behavior that harms consumers. Automating contract review by openly releasing high-quality data and fine-tuned models can increase access to legal support for small businesses and individuals, so that legal support is not exclusively available to wealthy companies.\n\n\nTo reduce the disparate societal costs of contract review, and to study how well NLP models generalize to specialized domains, the authors introduced a new large-scale dataset for contract review. As part of The Atticus Project, a non-profit organization of legal experts, CUAD is introduced, the Contract Understanding Atticus Dataset. This dataset was created with a year-long effort pushed forward by dozens of law student annotators, lawyers, and machine learning researchers. The dataset includes more than 500 contracts and more than 13,000 expert annotations that span 41 label categories. For each of 41 different labels, models must learn to highlight the portions of a contract most salient to that label. This makes the task a matter of finding needles in a haystack.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nThe CUAD includes commercial contracts selected from 25 different types of contracts based on the contract names as shown below. Within each type, the creators randomly selected contracts based on the names of the filing companies across the alphabet.\n\n\nType of Contracts: # of Docs\n\n\n\n```\nAffiliate Agreement:\t\t8\nAgency Agreement:\t\t 8 \nCollaboration/Cooperation Agreement: 26\nCo-Branding Agreement:\t\t6\nConsulting Agreement:\t\t11\nDevelopment Agreement:\t\t28\nDistributor Agreement:\t\t23\nEndorsement Agreement:\t\t10\nFranchise Agreement:\t\t14\nHosting Agreement:\t\t12\nIP Agreement:\t\t\t16\nJoint Venture Agreemen:\t\t22\nLicense Agreement:\t\t32\nMaintenance Agreement:\t\t24\nManufacturing Agreement:\t6\nMarketing Agreement:\t\t16\nNon-Compete/No-Solicit/Non-Disparagement Agreement: 3\nOutsourcing Agreement:\t\t12\nPromotion Agreement:\t\t9\nReseller Agreement:\t\t12\nService Agreement:\t\t24\nSponsorship Agreement:\t17\nSupply Agreement:\t\t13\nStrategic Alliance Agreement:\t32\nTransportation Agreement:\t1\nTOTAL:\t\t\t\t385\n\n```\n\nCategories\n\n\n\n```\nDocument Name\nParties\nAgreement Date\nEffective Date\nExpiration Date\nRenewal Term\nNotice Period To Terminate Renewal\nGoverning Law\nNon-Compete\nExclusivity\nChange Of Control\nAnti-Assignment\n\n```",
"#### Who are the source language producers?\n\n\nThe contracts were sourced from EDGAR, the Electronic Data Gathering, Analysis, and Retrieval system used at the U.S. Securities and Exchange Commission (SEC). Publicly traded companies in the United States are required to file certain contracts under the SEC rules. Access to these contracts is available to the public for free at URL Please read the Datasheet at URL for information on the intended use and limitations of the CUAD.",
"### Annotations",
"#### Annotation process\n\n\nThe labeling process included multiple steps to ensure accuracy:\n\n\n1. Law Student Training: law students attended training sessions on each of the categories that included a summary, video instructions by experienced attorneys, multiple quizzes and workshops. Students were then required to label sample contracts in eBrevia, an online contract review tool. The initial training took approximately 70-100 hours.\n2. Law Student Label: law students conducted manual contract review and labeling in eBrevia.\n3. Key Word Search: law students conducted keyword search in eBrevia to capture additional categories that have been missed during the “Student Label” step.\n4. Category-by-Category Report Review: law students exported the labeled clauses into reports, review each clause category-by-category and highlight clauses that they believe are mislabeled.\n5. Attorney Review: experienced attorneys reviewed the category-by-category report with students comments, provided comments and addressed student questions. When applicable, attorneys discussed such results with the students and reached consensus. Students made changes in eBrevia accordingly.\n6. eBrevia Extras Review. Attorneys and students used eBrevia to generate a list of “extras”, which are clauses that eBrevia AI tool identified as responsive to a category but not labeled by human annotators. Attorneys and students reviewed all of the “extras” and added the correct ones. The process is repeated until all or substantially all of the “extras” are incorrect labels.\n7. Final Report: The final report was exported into a CSV file. Volunteers manually added the “Yes/No” answer column to categories that do not contain an answer.",
"#### Who are the annotators?\n\n\nAnswered in above section.",
"### Personal and Sensitive Information\n\n\nSome clauses in the files are redacted because the party submitting these contracts redacted them to protect confidentiality. Such redaction may show up as asterisks (\\*\\*\\*) or underscores (\\_\\_\\_) or blank spaces. The dataset and the answers reflect such redactions. For example, the answer for “January \\_\\_ 2020” would be “1/[]/2020”).\n\n\nFor any categories that require an answer of “Yes/No”, annotators include full sentences as text context in a contract. To maintain consistency and minimize inter-annotator disagreement, annotators select text for the full sentence, under the instruction of “from period to period”.\n\n\nFor the other categories, annotators selected segments of the text in the contract that are responsive to each such category. One category in a contract may include multiple labels. For example, “Parties” may include 4-10 separate text strings that are not continuous in a contract. The answer is presented in the unified format separated by semicolons of “Party A Inc. (“Party A”); Party B Corp. (“Party B”)”.\n\n\nSome sentences in the files include confidential legends that are not part of the contracts. An example of such confidential legend is as follows:\n\n\nTHIS EXHIBIT HAS BEEN REDACTED AND IS THE SUBJECT OF A CONFIDENTIAL TREATMENT REQUEST. REDACTED MATERIAL IS MARKED WITH [\\* \\* \\*] AND HAS BEEN FILED SEPARATELY WITH THE SECURITIES AND EXCHANGE COMMISSION.\n\n\nSome sentences in the files contain irrelevant information such as footers or page numbers. Some sentences may not be relevant to the corresponding category. Some sentences may correspond to a different category. Because many legal clauses are very long and contain various sub-parts, sometimes only a sub-part of a sentence is responsive to a category.\n\n\nTo address the foregoing limitations, annotators manually deleted the portion that is not responsive, replacing it with the symbol \"\" to indicate that the two text segments do not appear immediately next to each other in the contracts. For example, if a “Termination for Convenience” clause starts with “Each Party may terminate this Agreement if” followed by three subparts “(a), (b) and (c)”, but only subpart (c) is responsive to this category, the authors manually deleted subparts (a) and (b) and replaced them with the symbol \"”. Another example is for “Effective Date”, the contract includes a sentence “This Agreement is effective as of the date written above” that appears after the date “January 1, 2010”. The annotation is as follows: “January 1, 2010 This Agreement is effective as of the date written above.”\n\n\nBecause the contracts were converted from PDF into TXT files, the converted TXT files may not stay true to the format of the original PDF files. For example, some contracts contain inconsistent spacing between words, sentences and paragraphs. Table format is not maintained in the TXT files.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nAttorney Advisors\nWei Chen, John Brockland, Kevin Chen, Jacky Fink, Spencer P. Goodson, Justin Haan, Alex Haskell, Kari Krusmark, Jenny Lin, Jonas Marson, Benjamin Petersen, Alexander Kwonji Rosenberg, William R. Sawyers, Brittany Schmeltz, Max Scott, Zhu Zhu\n\n\nLaw Student Leaders\nJohn Batoha, Daisy Beckner, Lovina Consunji, Gina Diaz, Chris Gronseth, Calvin Hannagan, Joseph Kroon, Sheetal Sharma Saran\n\n\nLaw Student Contributors\nScott Aronin, Bryan Burgoon, Jigar Desai, Imani Haynes, Jeongsoo Kim, Margaret Lynch, Allison Melville, Felix Mendez-Burgos, Nicole Mirkazemi, David Myers, Emily Rissberger, Behrang Seraj, Sarahginy Valcin\n\n\nTechnical Advisors & Contributors\nDan Hendrycks, Collin Burns, Spencer Ball, Anya Chen",
"### Licensing Information\n\n\nCUAD is licensed under the Creative Commons Attribution 4.0 (CC BY 4.0) license and free to the public for commercial and non-commercial use.\n\n\nThe creators make no representations or warranties regarding the license status of the underlying contracts, which are publicly available and downloadable from EDGAR.\nPrivacy Policy & Disclaimers\n\n\nThe categories or the contracts included in the dataset are not comprehensive or representative. The authors encourage the public to help improve them by sending them your comments and suggestions to info@URL. Comments and suggestions will be reviewed by The Atticus Project at its discretion and will be included in future versions of Atticus categories once approved.\n\n\nThe use of CUAD is subject to their privacy policy URL and disclaimer URL",
"### Contributions\n\n\nThanks to @bhavitvyamalik for adding this dataset."
] | [
114,
244,
10,
22,
18,
92,
42,
448,
4,
273,
109,
5,
387,
15,
733,
7,
8,
14,
218,
167,
19
] | [
"passage: TAGS\n#task_categories-question-answering #task_ids-closed-domain-qa #task_ids-extractive-qa #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-cuad #language-English #license-cc-by-4.0 #arxiv-2103.06268 #region-us \n### Dataset Summary\n\n\nContract Understanding Atticus Dataset (CUAD) v1 is a corpus of more than 13,000 labels in 510 commercial legal contracts that have been manually labeled to identify 41 categories of important clauses that lawyers look for when reviewing contracts in connection with corporate transactions. This dataset is a filtered version of CUAD. It excludes legal contracts with an Agreement date prior to 2002 and contracts which are not Business to Business. From the 41 categories we filtered them down to 12 which we considered the most crucial.\n\n\nWe wanted a small dataset to quickly fine-tune different models without sacrificing the categories which we deemed as important. The need to remove most questions was due to them not having an answer which is problematic since it can scue the resulting metrics such as the F1 score and the AUPR curve.\n\n\nCUAD is curated and maintained by The Atticus Project, Inc. to support NLP research and development in legal contract review. Analysis of CUAD can be found at URL Code for replicating the results and the trained model can be found at URL### Supported Tasks and Leaderboards### Languages\n\n\nThe dataset contains samples in English only.\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nAn example of 'train' looks as follows.### Data Fields\n\n\n* 'id': a 'string' feature.\n* 'title': a 'string' feature.\n* 'context': a 'string' feature.\n* 'question': a 'string' feature.\n* 'answers': a dictionary feature containing:\n\t+ 'text': a 'string' feature.\n\t+ 'answer\\_start': a 'int32' feature.",
"passage: ### Data Splits\n\n\nThis dataset is split into train/test set. Number of samples in each set is given below:\n\n\nTrain: CUAD, Test: 5442\n\n\nDataset Creation\n----------------### Curation Rationale\n\n\nA highly valuable specialized task without a public large-scale dataset is contract review, which costs humans substantial time, money, and attention. Many law firms spend approximately 50% of their time reviewing contracts (CEB, 2017). Due to the specialized training necessary to understand and interpret contracts, the billing rates for lawyers at large law firms are typically around $500-$900 per hour in the US. As a result, many transactions cost companies hundreds of thousands of dollars just so that lawyers can verify that there are no problematic obligations or requirements included in the contracts. Contract review can be a source of drudgery and, in comparison to other legal tasks, is widely considered to be especially boring.\n\n\nContract review costs also affect consumers. Since contract review costs are so prohibitive, contract review is not often performed outside corporate transactions. Small companies and individuals consequently often sign contracts without even reading them, which can result in predatory behavior that harms consumers. Automating contract review by openly releasing high-quality data and fine-tuned models can increase access to legal support for small businesses and individuals, so that legal support is not exclusively available to wealthy companies.\n\n\nTo reduce the disparate societal costs of contract review, and to study how well NLP models generalize to specialized domains, the authors introduced a new large-scale dataset for contract review. As part of The Atticus Project, a non-profit organization of legal experts, CUAD is introduced, the Contract Understanding Atticus Dataset. This dataset was created with a year-long effort pushed forward by dozens of law student annotators, lawyers, and machine learning researchers. The dataset includes more than 500 contracts and more than 13,000 expert annotations that span 41 label categories. For each of 41 different labels, models must learn to highlight the portions of a contract most salient to that label. This makes the task a matter of finding needles in a haystack.### Source Data",
"passage: #### Initial Data Collection and Normalization\n\n\nThe CUAD includes commercial contracts selected from 25 different types of contracts based on the contract names as shown below. Within each type, the creators randomly selected contracts based on the names of the filing companies across the alphabet.\n\n\nType of Contracts: # of Docs\n\n\n\n```\nAffiliate Agreement:\t\t8\nAgency Agreement:\t\t 8 \nCollaboration/Cooperation Agreement: 26\nCo-Branding Agreement:\t\t6\nConsulting Agreement:\t\t11\nDevelopment Agreement:\t\t28\nDistributor Agreement:\t\t23\nEndorsement Agreement:\t\t10\nFranchise Agreement:\t\t14\nHosting Agreement:\t\t12\nIP Agreement:\t\t\t16\nJoint Venture Agreemen:\t\t22\nLicense Agreement:\t\t32\nMaintenance Agreement:\t\t24\nManufacturing Agreement:\t6\nMarketing Agreement:\t\t16\nNon-Compete/No-Solicit/Non-Disparagement Agreement: 3\nOutsourcing Agreement:\t\t12\nPromotion Agreement:\t\t9\nReseller Agreement:\t\t12\nService Agreement:\t\t24\nSponsorship Agreement:\t17\nSupply Agreement:\t\t13\nStrategic Alliance Agreement:\t32\nTransportation Agreement:\t1\nTOTAL:\t\t\t\t385\n\n```\n\nCategories\n\n\n\n```\nDocument Name\nParties\nAgreement Date\nEffective Date\nExpiration Date\nRenewal Term\nNotice Period To Terminate Renewal\nGoverning Law\nNon-Compete\nExclusivity\nChange Of Control\nAnti-Assignment\n\n```#### Who are the source language producers?\n\n\nThe contracts were sourced from EDGAR, the Electronic Data Gathering, Analysis, and Retrieval system used at the U.S. Securities and Exchange Commission (SEC). Publicly traded companies in the United States are required to file certain contracts under the SEC rules. Access to these contracts is available to the public for free at URL Please read the Datasheet at URL for information on the intended use and limitations of the CUAD.### Annotations#### Annotation process\n\n\nThe labeling process included multiple steps to ensure accuracy:\n\n\n1. Law Student Training: law students attended training sessions on each of the categories that included a summary, video instructions by experienced attorneys, multiple quizzes and workshops. Students were then required to label sample contracts in eBrevia, an online contract review tool. The initial training took approximately 70-100 hours.\n2. Law Student Label: law students conducted manual contract review and labeling in eBrevia.\n3. Key Word Search: law students conducted keyword search in eBrevia to capture additional categories that have been missed during the “Student Label” step.\n4. Category-by-Category Report Review: law students exported the labeled clauses into reports, review each clause category-by-category and highlight clauses that they believe are mislabeled.\n5. Attorney Review: experienced attorneys reviewed the category-by-category report with students comments, provided comments and addressed student questions. When applicable, attorneys discussed such results with the students and reached consensus. Students made changes in eBrevia accordingly.\n6. eBrevia Extras Review. Attorneys and students used eBrevia to generate a list of “extras”, which are clauses that eBrevia AI tool identified as responsive to a category but not labeled by human annotators. Attorneys and students reviewed all of the “extras” and added the correct ones. The process is repeated until all or substantially all of the “extras” are incorrect labels.\n7. Final Report: The final report was exported into a CSV file. 
Volunteers manually added the “Yes/No” answer column to categories that do not contain an answer.",
"passage: #### Who are the annotators?\n\n\nAnswered in above section."
] |
eb29fab27c5ca7b37d973b117f82ae60bedb1bad | # AutoTrain Dataset for project: sample-diabetes-predict
## Dataset Description
This dataset has been automatically processed by AutoTrain for project sample-diabetes-predict.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"target": 0,
"feat_HighBP": 0.0,
"feat_HighChol": 0.0,
"feat_CholCheck": 1.0,
"feat_BMI": 34.0,
"feat_Smoker": 1.0,
"feat_Stroke": 0.0,
"feat_HeartDiseaseorAttack": 0.0,
"feat_PhysActivity": 1.0,
"feat_Fruits": 1.0,
"feat_Veggies": 1.0,
"feat_HvyAlcoholConsump": 0.0,
"feat_AnyHealthcare": 1.0,
"feat_NoDocbcCost": 0.0,
"feat_GenHlth": 3.0,
"feat_MentHlth": 0.0,
"feat_PhysHlth": 0.0,
"feat_DiffWalk": 0.0,
"feat_Sex": 0.0,
"feat_Age": 6.0,
"feat_Education": 6.0,
"feat_Income": 7.0
},
{
"target": 1,
"feat_HighBP": 0.0,
"feat_HighChol": 0.0,
"feat_CholCheck": 1.0,
"feat_BMI": 46.0,
"feat_Smoker": 1.0,
"feat_Stroke": 0.0,
"feat_HeartDiseaseorAttack": 0.0,
"feat_PhysActivity": 1.0,
"feat_Fruits": 1.0,
"feat_Veggies": 1.0,
"feat_HvyAlcoholConsump": 0.0,
"feat_AnyHealthcare": 1.0,
"feat_NoDocbcCost": 0.0,
"feat_GenHlth": 2.0,
"feat_MentHlth": 1.0,
"feat_PhysHlth": 0.0,
"feat_DiffWalk": 0.0,
"feat_Sex": 1.0,
"feat_Age": 10.0,
"feat_Education": 6.0,
"feat_Income": 5.0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"target": "ClassLabel(num_classes=2, names=['0.0', '1.0'], id=None)",
"feat_HighBP": "Value(dtype='float64', id=None)",
"feat_HighChol": "Value(dtype='float64', id=None)",
"feat_CholCheck": "Value(dtype='float64', id=None)",
"feat_BMI": "Value(dtype='float64', id=None)",
"feat_Smoker": "Value(dtype='float64', id=None)",
"feat_Stroke": "Value(dtype='float64', id=None)",
"feat_HeartDiseaseorAttack": "Value(dtype='float64', id=None)",
"feat_PhysActivity": "Value(dtype='float64', id=None)",
"feat_Fruits": "Value(dtype='float64', id=None)",
"feat_Veggies": "Value(dtype='float64', id=None)",
"feat_HvyAlcoholConsump": "Value(dtype='float64', id=None)",
"feat_AnyHealthcare": "Value(dtype='float64', id=None)",
"feat_NoDocbcCost": "Value(dtype='float64', id=None)",
"feat_GenHlth": "Value(dtype='float64', id=None)",
"feat_MentHlth": "Value(dtype='float64', id=None)",
"feat_PhysHlth": "Value(dtype='float64', id=None)",
"feat_DiffWalk": "Value(dtype='float64', id=None)",
"feat_Sex": "Value(dtype='float64', id=None)",
"feat_Age": "Value(dtype='float64', id=None)",
"feat_Education": "Value(dtype='float64', id=None)",
"feat_Income": "Value(dtype='float64', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 56552 |
| valid | 14140 |
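For convenience, here is a minimal loading sketch; the repository id is the one listed in this entry's metadata, and everything else is standard `datasets` usage:

```python
from datasets import load_dataset

# Repository id as listed in this card's metadata.
dataset = load_dataset("Plashkar/diabetes-predict-db")

train, valid = dataset["train"], dataset["valid"]
print(len(train), len(valid))    # 56552 14140, per the table above
print(train.features["target"])  # ClassLabel(num_classes=2, names=['0.0', '1.0'])
```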
| Plashkar/diabetes-predict-db | [
"region:us"
] | 2022-08-03T15:19:11+00:00 | {} | 2022-08-03T15:22:23+00:00 | [] | [] | TAGS
#region-us
| AutoTrain Dataset for project: sample-diabetes-predict
======================================================
Dataset Description
-------------------
This dataset has been automatically processed by AutoTrain for project sample-diabetes-predict.
### Languages
The BCP-47 code for the dataset's language is unk.
Dataset Structure
-----------------
### Data Instances
A sample from this dataset looks as follows:
### Dataset Fields
The dataset has the following fields (also called "features"):
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| [
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] | [
"TAGS\n#region-us \n",
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] | [
6,
27,
17,
23,
27
] | [
"passage: TAGS\n#region-us \n### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nA sample from this dataset looks as follows:### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] |
88c8874b5003f5defca3f2aad8031d5925ac3f8c | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: yhavinga/long-t5-tglobal-small-dutch-cnn
* Dataset: ml6team/cnn_dailymail_nl
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
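For illustration, a hedged sketch of reproducing this setup locally; the model id, dataset id, and the `article`/`highlights` columns come from this card's metadata, while the pipeline usage itself is plain `transformers` code, not the exact AutoTrain evaluation job:

```python
from datasets import load_dataset
from transformers import pipeline

# Model and dataset named in this card; per the metadata, col_mapping is
# text -> "article", target -> "highlights".
summarizer = pipeline("summarization", model="yhavinga/long-t5-tglobal-small-dutch-cnn")
test = load_dataset("ml6team/cnn_dailymail_nl", split="test")

pred = summarizer(test[0]["article"], truncation=True)[0]["summary_text"]
print(pred)
print(test[0]["highlights"])  # reference summary
```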
## Contributions
Thanks to [@yhavinga](https://huggingface.co/yhavinga) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-ml6team__cnn_dailymail_nl-bfaf23ee-12505670 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-03T18:33:11+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["ml6team/cnn_dailymail_nl"], "eval_info": {"task": "summarization", "model": "yhavinga/long-t5-tglobal-small-dutch-cnn", "metrics": [], "dataset_name": "ml6team/cnn_dailymail_nl", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "article", "target": "highlights"}}} | 2022-08-03T20:16:04+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: yhavinga/long-t5-tglobal-small-dutch-cnn
* Dataset: ml6team/cnn_dailymail_nl
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @yhavinga for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: yhavinga/long-t5-tglobal-small-dutch-cnn\n* Dataset: ml6team/cnn_dailymail_nl\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @yhavinga for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: yhavinga/long-t5-tglobal-small-dutch-cnn\n* Dataset: ml6team/cnn_dailymail_nl\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @yhavinga for evaluating this model."
] | [
13,
103,
16
] | [
"passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: yhavinga/long-t5-tglobal-small-dutch-cnn\n* Dataset: ml6team/cnn_dailymail_nl\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @yhavinga for evaluating this model."
] |
30cebad0e823eb4ab1becef422f44931b4da5b7e | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: Blaise-g/long_t5_global_large_pubmed_explanatory
* Dataset: ben-yu/ms2_combined
* Config: ben-yu--ms2_combined
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ben-yu](https://huggingface.co/ben-yu) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-ben-yu__ms2_combined-823f066f-12515671 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-04T02:44:17+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["ben-yu/ms2_combined"], "eval_info": {"task": "summarization", "model": "Blaise-g/long_t5_global_large_pubmed_explanatory", "metrics": [], "dataset_name": "ben-yu/ms2_combined", "dataset_config": "ben-yu--ms2_combined", "dataset_split": "train", "col_mapping": {"text": "Abstract", "target": "Target"}}} | 2022-08-04T19:56:42+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: Blaise-g/long_t5_global_large_pubmed_explanatory
* Dataset: ben-yu/ms2_combined
* Config: ben-yu--ms2_combined
* Split: train
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @ben-yu for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: Blaise-g/long_t5_global_large_pubmed_explanatory\n* Dataset: ben-yu/ms2_combined\n* Config: ben-yu--ms2_combined\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @ben-yu for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: Blaise-g/long_t5_global_large_pubmed_explanatory\n* Dataset: ben-yu/ms2_combined\n* Config: ben-yu--ms2_combined\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @ben-yu for evaluating this model."
] | [
13,
114,
16
] | [
"passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: Blaise-g/long_t5_global_large_pubmed_explanatory\n* Dataset: ben-yu/ms2_combined\n* Config: ben-yu--ms2_combined\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @ben-yu for evaluating this model."
] |
bd3854b9bb621424168dbfa48790db5385bc7a65 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP10
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-samsum-99725515-12535673 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-04T03:35:30+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP10", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}} | 2022-08-04T04:08:25+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP10
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @pszemraj for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP10\n* Dataset: samsum\n* Config: samsum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP10\n* Dataset: samsum\n* Config: samsum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] | [
13,
103,
16
] | [
"passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP10\n* Dataset: samsum\n* Config: samsum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @pszemraj for evaluating this model."
] |
cf293f7e3c683acacd14e082b236f5145eb3f85e | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP
* Dataset: xsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-xsum-d2b9e56c-12525674 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-04T03:36:46+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP", "metrics": [], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}} | 2022-08-05T09:19:15+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP
* Dataset: xsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @pszemraj for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP\n* Dataset: xsum\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP\n* Dataset: xsum\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] | [
13,
101,
16
] | [
"passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP\n* Dataset: xsum\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @pszemraj for evaluating this model."
] |
aaa27bc25d2c67b9870d4a6390ba6cbd30a7e558 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: Jacobsith/autotrain-Hello_there-1209845735
* Dataset: Blaise-g/SumPubmed
* Config: Blaise-g--SumPubmed
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Jacobsith](https://huggingface.co/Jacobsith) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-d94a9931-12545675 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-04T08:21:28+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["Blaise-g/SumPubmed"], "eval_info": {"task": "summarization", "model": "Jacobsith/autotrain-Hello_there-1209845735", "metrics": [], "dataset_name": "Blaise-g/SumPubmed", "dataset_config": "Blaise-g--SumPubmed", "dataset_split": "test", "col_mapping": {"text": "text", "target": "abstract"}}} | 2022-08-04T14:28:56+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: Jacobsith/autotrain-Hello_there-1209845735
* Dataset: Blaise-g/SumPubmed
* Config: Blaise-g--SumPubmed
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @Jacobsith for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: Jacobsith/autotrain-Hello_there-1209845735\n* Dataset: Blaise-g/SumPubmed\n* Config: Blaise-g--SumPubmed\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Jacobsith for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: Jacobsith/autotrain-Hello_there-1209845735\n* Dataset: Blaise-g/SumPubmed\n* Config: Blaise-g--SumPubmed\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Jacobsith for evaluating this model."
] | [
13,
109,
17
] | [
"passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: Jacobsith/autotrain-Hello_there-1209845735\n* Dataset: Blaise-g/SumPubmed\n* Config: Blaise-g--SumPubmed\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @Jacobsith for evaluating this model."
] |
8e38dfd0ff467955d47a0369af72b2a536a2e3c4 |
annotations_creators:
- found
language:
- Russian
language_creators:
- found
license:
- other
multilinguality:
- monolingual
pretty_name: trans_dataset
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- trans
- piska
task_categories:
- text-classification
task_ids:
- multi-class-classification
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
| merkalo-ziri/trans_dataset | [
"region:us"
] | 2022-08-04T09:14:45+00:00 | {} | 2022-08-04T09:29:44+00:00 | [] | [] | TAGS
#region-us
|
annotations_creators:
- found
language:
- Russian
language_creators:
- found
license:
- other
multilinguality:
- monolingual
pretty_name: trans_dataset
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- trans
- piska
task_categories:
- text-classification
task_ids:
- multi-class-classification
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @github-username for adding this dataset.
| [
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @github-username for adding this dataset."
] | [
"TAGS\n#region-us \n",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @github-username for adding this dataset."
] | [
6,
125,
24,
6,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
19
] | [
"passage: TAGS\n#region-us \n## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:### Dataset Summary### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions\n\nThanks to @github-username for adding this dataset."
] |
24b1ead0d5b681ab6350c049a5b6720bfddf384c | # Dataset Card for WITS
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Repository:** https://github.com/slvcsl/WITS
- **Paper:** http://ceur-ws.org/Vol-3033/paper65.pdf
### Dataset Summary
WITS (Wikipedia for Italian Text Summarization) is a large-scale dataset for abstractive summarization in Italian, built exploiting Wikipedia articles' structure. WITS contains almost 700,000 Wikipedia articles, together with their human-written summaries.
Compared to existing data for text summarization in Italian, WITS is more than an order of magnitude larger and more challenging, given its lengthy sources.
### Languages
The dataset is in Italian.
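A minimal loading sketch; the repository id is assumed from this repo's path, and since the column layout is not documented in this card, the features are inspected rather than hard-coded:

```python
from datasets import load_dataset

# Repository id assumed from this repo's path.
wits = load_dataset("silvia-casola/WITS")
print(wits)  # available splits and sizes

first_split = next(iter(wits.values()))
print(first_split.features)  # inspect the article/summary column names
```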
### Licensing Information
The dataset uses text from Wikipedia. Please refer to Wikipedia's license.
### Citation Information
If you use the dataset, please cite:
```
@inproceedings{DBLP:conf/clic-it/CasolaL21,
author={Silvia Casola and Alberto Lavelli},
title={WITS: Wikipedia for Italian Text Summarization},
year={2021},
cdate={1609459200000},
url={http://ceur-ws.org/Vol-3033/paper65.pdf},
booktitle={CLiC-it},
crossref={conf/clic-it/2021}
}
``` | silvia-casola/WITS | [
"region:us"
] | 2022-08-04T10:57:13+00:00 | {} | 2022-08-04T12:33:31+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for WITS
## Table of Contents
- Dataset Description
- Dataset Summary
- Languages
- Additional Information
- Licensing Information
- Citation Information
## Dataset Description
- Repository: URL
- Paper: URL
### Dataset Summary
WITS (Wikipedia for Italian Text Summarization) is a large-scale dataset for abstractive summarization in Italian, built exploiting Wikipedia articles' structure. WITS contains almost 700,000 Wikipedia articles, together with their human-written summaries.
Compared to existing data for text summarization in Italian, WITS is more than an order of magnitude larger and more challenging, given its lengthy sources.
### Languages
The dataset is in Italian.
### Licensing Information
The dataset uses text from Wikipedia. Please refer to Wikipedia's license.
If you use the dataset, please cite:
| [
"# Dataset Card for WITS",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Languages\n- Additional Information\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Repository: URL\n- Paper: URL",
"### Dataset Summary\n\nWITS (Wikipedia for Italian Text Summarization) is a large-scale dataset for abstractive summarization in Italian, built exploiting Wikipedia articles' structure. WITS contains almost 700,000 Wikipedia articles, together with their human-written summaries.\n\nCompared to existing data for text summarization in Italian, WITS is more than an order of magnitude larger and more challenging, given its lengthy sources.",
"### Languages\n\nThe dataset is in Italian.",
"### Licensing Information\n\nThe dataset uses text from Wikipedia. Please refer to Wikipedia's license.\n\n\n\nIf you use the dataset, please cite:"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for WITS",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Languages\n- Additional Information\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Repository: URL\n- Paper: URL",
"### Dataset Summary\n\nWITS (Wikipedia for Italian Text Summarization) is a large-scale dataset for abstractive summarization in Italian, built exploiting Wikipedia articles' structure. WITS contains almost 700,000 Wikipedia articles, together with their human-written summaries.\n\nCompared to existing data for text summarization in Italian, WITS is more than an order of magnitude larger and more challenging, given its lengthy sources.",
"### Languages\n\nThe dataset is in Italian.",
"### Licensing Information\n\nThe dataset uses text from Wikipedia. Please refer to Wikipedia's license.\n\n\n\nIf you use the dataset, please cite:"
] | [
6,
7,
31,
14,
97,
11,
33
] | [
"passage: TAGS\n#region-us \n# Dataset Card for WITS## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Languages\n- Additional Information\n - Licensing Information\n - Citation Information## Dataset Description\n\n- Repository: URL\n- Paper: URL### Dataset Summary\n\nWITS (Wikipedia for Italian Text Summarization) is a large-scale dataset for abstractive summarization in Italian, built exploiting Wikipedia articles' structure. WITS contains almost 700,000 Wikipedia articles, together with their human-written summaries.\n\nCompared to existing data for text summarization in Italian, WITS is more than an order of magnitude larger and more challenging, given its lengthy sources.### Languages\n\nThe dataset is in Italian.### Licensing Information\n\nThe dataset uses text from Wikipedia. Please refer to Wikipedia's license.\n\n\n\nIf you use the dataset, please cite:"
] |
7b384a7d95575d3c128031a3a3654d21f4528c18 |
# Dataset Card for severo/embellishments
Test: link to a space:
https://huggingface.co/spaces/severo/voronoi-cloth
https://severo-voronoi-cloth.hf.space
## Dataset Description
- **Homepage:** [Digitised Books - Images identified as Embellishments - Homepage](https://bl.iro.bl.uk/concern/datasets/59d1aa35-c2d7-46e5-9475-9d0cd8df721e)
- **Point of Contact:** [Sylvain Lesage](mailto:[email protected])
### Dataset Summary
This small dataset contains the thumbnails of the first 100 entries of [Digitised Books - Images identified as Embellishments. c. 1510 - c. 1900. JPG](https://bl.iro.bl.uk/concern/datasets/59d1aa35-c2d7-46e5-9475-9d0cd8df721e). It has been uploaded to the Hub to reproduce the tutorial by Daniel van Strien: [Using 🤗 datasets for image search](https://danielvanstrien.xyz/metadata/deployment/huggingface/ethics/huggingface-datasets/faiss/2022/01/13/image_search.html).
## Dataset Structure
### Data Instances
A typical row contains an image thumbnail, its filename, and the year of publication of the book it was extracted from.
An example looks as follows:
```
{
'fname': '000811462_05_000205_1_The Pictorial History of England being a history of the people as well as a hi_1855.jpg',
'year': '1855',
'path': 'embellishments/1855/000811462_05_000205_1_The Pictorial History of England being a history of the people as well as a hi_1855.jpg',
'img': ...
}
```
### Data Fields
- `fname`: the image filename.
- `year`: a string with the year of publication of the book from which the image has been extracted
- `path`: local path to the image
- `img`: a thumbnail of the image with a max height and width of 224 pixels
### Data Splits
The dataset only contains 100 rows, in a single 'train' split.
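As a usage sketch, the split can be loaded and a row inspected as follows; note that the card title says `severo/embellishments`, while the repository actually hosting this card (used below) is `severo/dummy_public_renamed`:

```python
from datasets import load_dataset

# The repo was renamed; this id matches the repository hosting this card.
ds = load_dataset("severo/dummy_public_renamed", split="train")

row = ds[0]
print(row["fname"], row["year"], row["path"])
img = row["img"]  # a PIL image thumbnail, at most 224 px in height and width
```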
## Dataset Creation
### Curation Rationale
This dataset was chosen by Daniel van Strien for his tutorial [Using 🤗 datasets for image search](https://danielvanstrien.xyz/metadata/deployment/huggingface/ethics/huggingface-datasets/faiss/2022/01/13/image_search.html), which includes the code in Python to do it.
### Source Data
#### Initial Data Collection and Normalization
As stated on the British Library webpage:
> The images were algorithmically gathered from 49,455 digitised books, equating to 65,227 volumes (25+ million pages), published between c. 1510 - c. 1900. The books cover a wide range of subject areas including philosophy, history, poetry and literature. The images are in .JPEG format.

The associated BCP-47 code is `en`.
#### Who are the source data producers?
British Library, British Library Labs, Adrian Edwards (Curator), Neil Fitzgerald (Contributor ORCID)
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
[N/A]
### Discussion of Biases
[N/A]
### Other Known Limitations
This is a toy dataset that aims at:
- validating the process described in the tutorial [Using 🤗 datasets for image search](https://danielvanstrien.xyz/metadata/deployment/huggingface/ethics/huggingface-datasets/faiss/2022/01/13/image_search.html) by Daniel van Strien,
- showing the [dataset viewer](https://huggingface.co/datasets/severo/embellishments/viewer/severo--embellishments/train) on an image dataset.
## Additional Information
### Dataset Curators
The dataset was created by Sylvain Lesage at Hugging Face, to replicate the tutorial [Using 🤗 datasets for image search](https://danielvanstrien.xyz/metadata/deployment/huggingface/ethics/huggingface-datasets/faiss/2022/01/13/image_search.html) by Daniel van Strien.
### Licensing Information
CC0 1.0 Universal Public Domain
| severo/dummy_public_renamed | [
"annotations_creators:no-annotation",
"size_categories:n<1K",
"source_datasets:original",
"license:cc0-1.0",
"region:us"
] | 2022-08-04T13:25:55+00:00 | {"annotations_creators": ["no-annotation"], "license": "cc0-1.0", "size_categories": ["n<1K"], "source_datasets": ["original"], "pretty_name": "Digitised Books - Images identified as Embellishments. c. 1510 - c. 1900. JPG"} | 2023-10-04T08:25:29+00:00 | [] | [] | TAGS
#annotations_creators-no-annotation #size_categories-n<1K #source_datasets-original #license-cc0-1.0 #region-us
|
# Dataset Card for severo/embellishments
Test: link to a space:
URL
URL
## Dataset Description
- Homepage: Digitised Books - Images identified as Embellishments - Homepage
- Point of Contact: Sylvain Lesage
### Dataset Summary
This small dataset contains the thumbnails of the first 100 entries of Digitised Books - Images identified as Embellishments. c. 1510 - c. 1900. JPG. It has been uploaded to the Hub to reproduce the tutorial by Daniel van Strien: Using datasets for image search.
## Dataset Structure
### Data Instances
A typical row contains an image thumbnail, its filename, and the year of publication of the book it was extracted from.
An example looks as follows:
### Data Fields
- 'fname': the image filename.
- 'year': a string with the year of publication of the book from which the image has been extracted
- 'path': local path to the image
- 'img': a thumbnail of the image with a max height and width of 224 pixels
### Data Splits
The dataset only contains 100 rows, in a single 'train' split.
## Dataset Creation
### Curation Rationale
This dataset was chosen by Daniel van Strien for his tutorial Using datasets for image search, which includes the code in Python to do it.
### Source Data
#### Initial Data Collection and Normalization
As stated on the British Library webpage:
> The images were algorithmically gathered from 49,455 digitised books, equating to 65,227 volumes (25+ million pages), published between c. 1510 - c. 1900. The books cover a wide range of subject areas including philosophy, history, poetry and literature. The images are in .JPEG format.

The associated BCP-47 code is 'en'.
#### Who are the source data producers?
British Library, British Library Labs, Adrian Edwards (Curator), Neil Fitzgerald (Contributor ORCID)
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
[N/A]
### Discussion of Biases
[N/A]
### Other Known Limitations
This is a toy dataset that aims at:
- validating the process described in the tutorial Using datasets for image search by Daniel van Strien,
- showing the dataset viewer on an image dataset.
## Additional Information
### Dataset Curators
The dataset was created by Sylvain Lesage at Hugging Face, to replicate the tutorial Using datasets for image search by Daniel van Strien.
### Licensing Information
CC0 1.0 Universal Public Domain
| [
"# Dataset Card for severo/embellishments\n\nTest: link to a space:\n\nURL\n\nURL",
"## Dataset Description\n\n- Homepage: Digitised Books - Images identified as Embellishments - Homepage\n- Point of Contact: Sylvain Lesage",
"### Dataset Summary\n\nThis small dataset contains the thumbnails of the first 100 entries of Digitised Books - Images identified as Embellishments. c. 1510 - c. 1900. JPG. It has been uploaded to the Hub to reproduce the tutorial by Daniel van Strien: Using datasets for image search.",
"## Dataset Structure",
"### Data Instances\n\nA typical row contains an image thumbnail, its filename, and the year of publication of the book it was extracted from.\n\nAn example looks as follows:",
"### Data Fields\n\n- 'fname': the image filename.\n- 'year': a string with the year of publication of the book from which the image has been extracted\n- 'path': local path to the image\n- 'img': a thumbnail of the image with a max height and width of 224 pixels",
"### Data Splits\n\nThe dataset only contains 100 rows, in a single 'train' split.",
"## Dataset Creation",
"### Curation Rationale\n\nThis dataset was chosen by Daniel van Strien for his tutorial Using datasets for image search, which includes the code in Python to do it.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nAs stated on the British Library webpage:\n> The images were algorithmically gathered from 49,455 digitised books, equating to 65,227 volumes (25+ million pages), published between c. 1510 - c. 1900. The books cover a wide range of subject areas including philosophy, history, poetry and literature. The images are in .JPEG format.d BCP-47 code is 'en'.",
"#### Who are the source data producers?\n\nBritish Library, British Library Labs, Adrian Edwards (Curator), Neil Fitzgerald (Contributor ORCID)",
"### Annotations\n\nThe dataset does not contain any additional annotations.",
"#### Annotation process\n\n[N/A]",
"#### Who are the annotators?\n\n[N/A]",
"### Personal and Sensitive Information\n\n[N/A]",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\n[N/A]",
"### Discussion of Biases\n\n[N/A]",
"### Other Known Limitations\n\nThis is a toy dataset that aims at:\n- validating the process described in the tutorial Using datasets for image search by Daniel van Strien,\n- showing the dataset viewer on an image dataset.",
"## Additional Information",
"### Dataset Curators\n\nThe dataset was created by Sylvain Lesage at Hugging Face, to replicate the tutorial Using datasets for image search by Daniel van Strien.",
"### Licensing Information\n\nCC0 1.0 Universal Public Domain"
] | [
"TAGS\n#annotations_creators-no-annotation #size_categories-n<1K #source_datasets-original #license-cc0-1.0 #region-us \n",
"# Dataset Card for severo/embellishments\n\nTest: link to a space:\n\nURL\n\nURL",
"## Dataset Description\n\n- Homepage: Digitised Books - Images identified as Embellishments - Homepage\n- Point of Contact: Sylvain Lesage",
"### Dataset Summary\n\nThis small dataset contains the thumbnails of the first 100 entries of Digitised Books - Images identified as Embellishments. c. 1510 - c. 1900. JPG. It has been uploaded to the Hub to reproduce the tutorial by Daniel van Strien: Using datasets for image search.",
"## Dataset Structure",
"### Data Instances\n\nA typical row contains an image thumbnail, its filename, and the year of publication of the book it was extracted from.\n\nAn example looks as follows:",
"### Data Fields\n\n- 'fname': the image filename.\n- 'year': a string with the year of publication of the book from which the image has been extracted\n- 'path': local path to the image\n- 'img': a thumbnail of the image with a max height and width of 224 pixels",
"### Data Splits\n\nThe dataset only contains 100 rows, in a single 'train' split.",
"## Dataset Creation",
"### Curation Rationale\n\nThis dataset was chosen by Daniel van Strien for his tutorial Using datasets for image search, which includes the code in Python to do it.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nAs stated on the British Library webpage:\n> The images were algorithmically gathered from 49,455 digitised books, equating to 65,227 volumes (25+ million pages), published between c. 1510 - c. 1900. The books cover a wide range of subject areas including philosophy, history, poetry and literature. The images are in .JPEG format.d BCP-47 code is 'en'.",
"#### Who are the source data producers?\n\nBritish Library, British Library Labs, Adrian Edwards (Curator), Neil Fitzgerald (Contributor ORCID)",
"### Annotations\n\nThe dataset does not contain any additional annotations.",
"#### Annotation process\n\n[N/A]",
"#### Who are the annotators?\n\n[N/A]",
"### Personal and Sensitive Information\n\n[N/A]",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\n[N/A]",
"### Discussion of Biases\n\n[N/A]",
"### Other Known Limitations\n\nThis is a toy dataset that aims at:\n- validating the process described in the tutorial Using datasets for image search by Daniel van Strien,\n- showing the dataset viewer on an image dataset.",
"## Additional Information",
"### Dataset Curators\n\nThe dataset was created by Sylvain Lesage at Hugging Face, to replicate the tutorial Using datasets for image search by Daniel van Strien.",
"### Licensing Information\n\nCC0 1.0 Universal Public Domain"
] | [
45,
21,
30,
77,
6,
44,
74,
25,
5,
40,
4,
105,
39,
17,
10,
14,
13,
8,
12,
13,
53,
5,
40,
12
] | [
"passage: TAGS\n#annotations_creators-no-annotation #size_categories-n<1K #source_datasets-original #license-cc0-1.0 #region-us \n# Dataset Card for severo/embellishments\n\nTest: link to a space:\n\nURL\n\nURL## Dataset Description\n\n- Homepage: Digitised Books - Images identified as Embellishments - Homepage\n- Point of Contact: Sylvain Lesage### Dataset Summary\n\nThis small dataset contains the thumbnails of the first 100 entries of Digitised Books - Images identified as Embellishments. c. 1510 - c. 1900. JPG. It has been uploaded to the Hub to reproduce the tutorial by Daniel van Strien: Using datasets for image search.## Dataset Structure### Data Instances\n\nA typical row contains an image thumbnail, its filename, and the year of publication of the book it was extracted from.\n\nAn example looks as follows:### Data Fields\n\n- 'fname': the image filename.\n- 'year': a string with the year of publication of the book from which the image has been extracted\n- 'path': local path to the image\n- 'img': a thumbnail of the image with a max height and width of 224 pixels### Data Splits\n\nThe dataset only contains 100 rows, in a single 'train' split.## Dataset Creation### Curation Rationale\n\nThis dataset was chosen by Daniel van Strien for his tutorial Using datasets for image search, which includes the code in Python to do it.### Source Data#### Initial Data Collection and Normalization\n\nAs stated on the British Library webpage:\n> The images were algorithmically gathered from 49,455 digitised books, equating to 65,227 volumes (25+ million pages), published between c. 1510 - c. 1900. The books cover a wide range of subject areas including philosophy, history, poetry and literature. The images are in .JPEG format.d BCP-47 code is 'en'."
] |
1cc5ff914b850c92808d1a9c92082d8d6101b165 | EN-ME Special Chars is a dataset of roughly 58000 aligned sentence pairs in English and Middle English, collected from the works of Geoffrey Chaucer, John Wycliffe, and the Gawain Poet.
It includes special characters such as þ.
This dataset reflects the spelling inconsistencies characteristic of Middle English.
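A minimal loading sketch (the repository id comes from this repo's path; the split and column names are not documented in this card, so a row is printed to reveal them):

```python
from datasets import load_dataset

# Repository id assumed from this repo's path.
pairs = load_dataset("Qilex/EN-ME")
print(pairs)              # available splits
print(pairs["train"][0])  # assumes a "train" split; shows the EN / ME fields
```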
| Qilex/EN-ME | [
"task_categories:translation",
"multilinguality:translation",
"size_categories:10K<n<100K",
"language:en",
"language:me",
"license:afl-3.0",
"middle english",
"region:us"
] | 2022-08-04T16:13:33+00:00 | {"language": ["en", "me"], "license": ["afl-3.0"], "multilinguality": ["translation"], "size_categories": ["10K<n<100K"], "task_categories": ["translation"], "pretty_name": "EN-ME", "tags": ["middle english"]} | 2022-08-11T20:25:34+00:00 | [] | [
"en",
"me"
] | TAGS
#task_categories-translation #multilinguality-translation #size_categories-10K<n<100K #language-English #language-me #license-afl-3.0 #middle english #region-us
| EN-ME Special Chars is a dataset of roughly 58000 aligned sentence pairs in English and Middle English, collected from the works of Geoffrey Chaucer, John Wycliffe, and the Gawain Poet.
It includes special characters such as þ.
This dataset reflects the spelling inconsistencies characteristic of Middle English.
| [] | [
"TAGS\n#task_categories-translation #multilinguality-translation #size_categories-10K<n<100K #language-English #language-me #license-afl-3.0 #middle english #region-us \n"
] | [
54
] | [
"passage: TAGS\n#task_categories-translation #multilinguality-translation #size_categories-10K<n<100K #language-English #language-me #license-afl-3.0 #middle english #region-us \n"
] |
231c30478949022b78907358aa07d7eefee8c7e4 | The dataset contains well-balanced disaster and non-disaster tweets selected from 2011, 2012, 2013, 2014, 2015, 2017 and 2018.
The predicted label is shown in the `predict` column. | sacculifer/dimbat_disaster_detection | [
"region:us"
] | 2022-08-04T22:08:54+00:00 | {} | 2022-08-05T12:18:31+00:00 | [] | [] | TAGS
#region-us
| The dataset contains well-balanced disaster and non-disaster tweets selected from 2011, 2012, 2013, 2014, 2015, 2017 and 2018.
The predicted label is shown in the predict column. | [] | [
"TAGS\n#region-us \n"
] | [
6
] | [
"passage: TAGS\n#region-us \n"
] |
cd13d68018443758e8888f79e79709e69e032e73 | ## Labels
- biological --- 1
- earthquake --- 2
- flood --- 3
- hurricane & tornado --- 4
- wildfire --- 5
- industrial --- 6
- societal --- 7
- transportation --- 8
- meteor --- 9
- haze --- 10 | sacculifer/dimbat_disaster_type_detection | [
"region:us"
] | 2022-08-04T22:39:30+00:00 | {} | 2022-08-05T08:35:34+00:00 | [] | [] | TAGS
#region-us
| ## Labels
- biological --- 1
- earthquake --- 2
- flood --- 3
- hurricane & tornado --- 4
- wildfire --- 5
- industrial --- 6
- societal --- 7
- transportation --- 8
- meteor --- 9
- haze --- 10 | [
"## Labels\n- biological --- 1\n- earthquake --- 2\n- flood --- 3\n- hurricane & tornado --- 4\n- wildfire --- 5\n- industrial --- 6\n- societal --- 7\n- transportation --- 8\n- meteor --- 9\n- haze --- 10"
] | [
"TAGS\n#region-us \n",
"## Labels\n- biological --- 1\n- earthquake --- 2\n- flood --- 3\n- hurricane & tornado --- 4\n- wildfire --- 5\n- industrial --- 6\n- societal --- 7\n- transportation --- 8\n- meteor --- 9\n- haze --- 10"
] | [
6,
55
] | [
"passage: TAGS\n#region-us \n## Labels\n- biological --- 1\n- earthquake --- 2\n- flood --- 3\n- hurricane & tornado --- 4\n- wildfire --- 5\n- industrial --- 6\n- societal --- 7\n- transportation --- 8\n- meteor --- 9\n- haze --- 10"
] |
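The label scheme documented in the card above maps directly onto a small lookup table; a minimal Python sketch:

```python
# Disaster-type label ids exactly as documented in the card above.
DISASTER_TYPES = {
    1: "biological",
    2: "earthquake",
    3: "flood",
    4: "hurricane & tornado",
    5: "wildfire",
    6: "industrial",
    7: "societal",
    8: "transportation",
    9: "meteor",
    10: "haze",
}

def label_name(label_id: int) -> str:
    """Return the human-readable disaster type for a predicted label id."""
    return DISASTER_TYPES[label_id]
```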
9f538df29ee297a7750ec7d270ab49b7810e8b31 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Translation
* Model: Lvxue/finetuned-mt5-small-10epoch
* Dataset: wmt16
* Config: ro-en
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
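For illustration, a hedged sketch of this setup; the model, dataset, config, and the `translation.en`/`translation.ro` column mapping come from this card's metadata, and it is assumed the fine-tuned model works with the generic translation pipeline:

```python
from datasets import load_dataset
from transformers import pipeline

# Model, dataset, and config named in this card.
translator = pipeline("translation_en_to_ro", model="Lvxue/finetuned-mt5-small-10epoch")
test = load_dataset("wmt16", "ro-en", split="test")

pair = test[0]["translation"]
print(translator(pair["en"])[0]["translation_text"])
print(pair["ro"])  # reference translation
```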
## Contributions
Thanks to [@Lvxue](https://huggingface.co/Lvxue) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-wmt16-d9e39a12-12565676 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-05T01:13:09+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["wmt16"], "eval_info": {"task": "translation", "model": "Lvxue/finetuned-mt5-small-10epoch", "metrics": [], "dataset_name": "wmt16", "dataset_config": "ro-en", "dataset_split": "test", "col_mapping": {"source": "translation.en", "target": "translation.ro"}}} | 2022-08-05T01:14:27+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Translation
* Model: Lvxue/finetuned-mt5-small-10epoch
* Dataset: wmt16
* Config: ro-en
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @Lvxue for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Translation\n* Model: Lvxue/finetuned-mt5-small-10epoch\n* Dataset: wmt16\n* Config: ro-en\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Lvxue for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Translation\n* Model: Lvxue/finetuned-mt5-small-10epoch\n* Dataset: wmt16\n* Config: ro-en\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Lvxue for evaluating this model."
] | [
13,
94,
17
] | [
"passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Translation\n* Model: Lvxue/finetuned-mt5-small-10epoch\n* Dataset: wmt16\n* Config: ro-en\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @Lvxue for evaluating this model."
] |
403489eb4daef8bd3b2f1b54cfb0ca07c1490ee5 |
### Dataset Summary
Placeholder
You can load the dataset via:
```
import datasets
data = datasets.load_dataset('GEM/wiki_lingua')
```
The data loader can be found [here](https://huggingface.co/datasets/GEM/wiki_lingua).
#### website
None (See Repository)
#### paper
https://www.aclweb.org/anthology/2020.findings-emnlp.360/
#### authors
Faisal Ladhak (Columbia University), Esin Durmus (Stanford University), Claire Cardie (Cornell University), Kathleen McKeown (Columbia University)
## Dataset Overview
### Where to find the Data and its Documentation
#### Webpage
<!-- info: What is the webpage for the dataset (if it exists)? -->
<!-- scope: telescope -->
None (See Repository)
#### Download
<!-- info: What is the link to where the original dataset is hosted? -->
<!-- scope: telescope -->
https://github.com/esdurmus/Wikilingua
#### Paper
<!-- info: What is the link to the paper describing the dataset (open access preferred)? -->
<!-- scope: telescope -->
https://www.aclweb.org/anthology/2020.findings-emnlp.360/
#### BibTex
<!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. -->
<!-- scope: microscope -->
@inproceedings{ladhak-etal-2020-wikilingua,
title = "{W}iki{L}ingua: A New Benchmark Dataset for Cross-Lingual Abstractive Summarization",
author = "Ladhak, Faisal and
Durmus, Esin and
Cardie, Claire and
McKeown, Kathleen",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.findings-emnlp.360",
doi = "10.18653/v1/2020.findings-emnlp.360",
pages = "4034--4048",
abstract = "We introduce WikiLingua, a large-scale, multilingual dataset for the evaluation of cross-lingual abstractive summarization systems. We extract article and summary pairs in 18 languages from WikiHow, a high quality, collaborative resource of how-to guides on a diverse set of topics written by human authors. We create gold-standard article-summary alignments across languages by aligning the images that are used to describe each how-to step in an article. As a set of baselines for further studies, we evaluate the performance of existing cross-lingual abstractive summarization methods on our dataset. We further propose a method for direct cross-lingual summarization (i.e., without requiring translation at inference time) by leveraging synthetic data and Neural Machine Translation as a pre-training step. Our method significantly outperforms the baseline approaches, while being more cost efficient during inference.",
}
```
#### Contact Name
<!-- quick -->
<!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
Faisal Ladhak, Esin Durmus
#### Contact Email
<!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
[email protected], [email protected]
#### Has a Leaderboard?
<!-- info: Does the dataset have an active leaderboard? -->
<!-- scope: telescope -->
no
### Languages and Intended Use
#### Multilingual?
<!-- quick -->
<!-- info: Is the dataset multilingual? -->
<!-- scope: telescope -->
yes
#### Covered Dialects
<!-- info: What dialects are covered? Are there multiple dialects per language? -->
<!-- scope: periscope -->
Dataset does not have multiple dialects per language.
#### Covered Languages
<!-- quick -->
<!-- info: What languages/dialects are covered in the dataset? -->
<!-- scope: telescope -->
`English`, `Spanish, Castilian`, `Portuguese`, `French`, `German`, `Russian`, `Italian`, `Indonesian`, `Dutch, Flemish`, `Arabic`, `Chinese`, `Vietnamese`, `Thai`, `Japanese`, `Korean`, `Hindi`, `Czech`, `Turkish`
#### Whose Language?
<!-- info: Whose language is in the dataset? -->
<!-- scope: periscope -->
No information about the user demographic is available.
#### License
<!-- quick -->
<!-- info: What is the license of the dataset? -->
<!-- scope: telescope -->
cc-by-3.0: Creative Commons Attribution 3.0 Unported
#### Intended Use
<!-- info: What is the intended use of the dataset? -->
<!-- scope: microscope -->
The dataset was intended to serve as a large-scale, high-quality benchmark dataset for cross-lingual summarization.
#### Primary Task
<!-- info: What primary task does the dataset support? -->
<!-- scope: telescope -->
Summarization
#### Communicative Goal
<!-- quick -->
<!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. -->
<!-- scope: periscope -->
Produce a high quality summary for the given input article.
### Credit
#### Curation Organization Type(s)
<!-- info: In what kind of organization did the dataset curation happen? -->
<!-- scope: telescope -->
`academic`
#### Curation Organization(s)
<!-- info: Name the organization(s). -->
<!-- scope: periscope -->
Columbia University
#### Dataset Creators
<!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). -->
<!-- scope: microscope -->
Faisal Ladhak (Columbia University), Esin Durmus (Stanford University), Claire Cardie (Cornell University), Kathleen McKeown (Columbia University)
#### Who added the Dataset to GEM?
<!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. -->
<!-- scope: microscope -->
Jenny Chim (Queen Mary University of London), Faisal Ladhak (Columbia University)
### Dataset Structure
#### Data Fields
<!-- info: List and describe the fields present in the dataset. -->
<!-- scope: telescope -->
- `gem_id` -- The id for the data instance.
- `source_language` -- The language of the source article.
- `target_language` -- The language of the target summary.
- `source` -- The source document.
- `target` -- The target summary.
#### Example Instance
<!-- info: Provide a JSON formatted example of a typical instance in the dataset. -->
<!-- scope: periscope -->
```json
{
"gem_id": "wikilingua_crosslingual-train-12345",
"gem_parent_id": "wikilingua_crosslingual-train-12345",
"source_language": "fr",
"target_language": "de",
"source": "Document in fr",
"target": "Summary in de",
}
```
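To make the format concrete, here is a small filtering sketch; the config name `wikilingua_crosslingual` is only inferred from the `gem_id` prefix above and should be treated as an assumption.
```python
# Illustrative sketch: select fr -> de pairs from the cross-lingual data.
# The config name is inferred from the gem_id prefix and may differ.
import datasets

data = datasets.load_dataset("GEM/wiki_lingua", "wikilingua_crosslingual", split="train")
fr_to_de = data.filter(
    lambda ex: ex["source_language"] == "fr" and ex["target_language"] == "de"
)
print(fr_to_de[0]["source"][:100], "->", fr_to_de[0]["target"][:100])
```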
#### Data Splits
<!-- info: Describe and name the splits in the dataset if there are more than one. -->
<!-- scope: periscope -->
The data is split into train/dev/test. In addition to the full test set, there's also a sampled version of the test set.
#### Splitting Criteria
<!-- info: Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g., if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here. -->
<!-- scope: microscope -->
The data was split to ensure the same document would appear in the same split across languages so as to ensure there's no leakage into the test set.
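A quick sanity check of this property can be sketched as follows; hashing the raw source text is an illustrative heuristic, not the document-level alignment the authors actually used.
```python
# Heuristic leakage check: identical source text appearing in both the
# train and test splits would indicate leakage. Illustrative only.
import datasets

splits = {s: datasets.load_dataset("GEM/wiki_lingua", split=s)
          for s in ("train", "validation", "test")}
hashes = {s: {hash(ex["source"]) for ex in ds} for s, ds in splits.items()}
print("train/test source overlap:", len(hashes["train"] & hashes["test"]))
```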
## Dataset in GEM
### Rationale for Inclusion in GEM
#### Why is the Dataset in GEM?
<!-- info: What does this dataset contribute toward better generation evaluation and why is it part of GEM? -->
<!-- scope: microscope -->
This dataset provides a large-scale, high-quality resource for cross-lingual summarization in 18 languages, increasing the coverage of languages for the GEM summarization task.
#### Similar Datasets
<!-- info: Do other datasets for the high level task exist? -->
<!-- scope: telescope -->
yes
#### Unique Language Coverage
<!-- info: Does this dataset cover other languages than other datasets for the same task? -->
<!-- scope: periscope -->
yes
#### Difference from other GEM datasets
<!-- info: What else sets this dataset apart from other similar datasets in GEM? -->
<!-- scope: microscope -->
XSum covers English news articles, and MLSum covers news articles in German and Spanish.
In contrast, this dataset has how-to articles in 18 languages, substantially increasing the languages covered. Moreover, it provides a different domain than the other two datasets.
#### Ability that the Dataset measures
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: periscope -->
The ability to generate quality summaries across multiple languages.
### GEM-Specific Curation
#### Modified for GEM?
<!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? -->
<!-- scope: telescope -->
yes
#### GEM Modifications
<!-- info: What changes have been made to the original dataset? -->
<!-- scope: periscope -->
`other`
#### Modification Details
<!-- info: For each of these changes, describe them in more detail and provide the intended purpose of the modification -->
<!-- scope: microscope -->
The previous version had separate data loaders for each language. In this version, we've created a single monolingual data loader, which contains monolingual data in each of the 18 languages. In addition, we've also created a single cross-lingual data loader across all the language pairs in the dataset.
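In practice this means per-language subsets now come from filtering one loader rather than loading per-language datasets; a minimal sketch follows (the `source_language` field name is taken from the Data Fields section above, everything else is assumed).
```python
# Minimal sketch of the consolidated loader: one dataset, filtered by
# language, instead of one loader per language as in the previous version.
import datasets

mono = datasets.load_dataset("GEM/wiki_lingua", split="train")
turkish = mono.filter(lambda ex: ex["source_language"] == "tr")
print(f"{len(turkish)} Turkish articles")
```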
#### Additional Splits?
<!-- info: Does GEM provide additional splits to the dataset? -->
<!-- scope: telescope -->
no
### Getting Started with the Task
## Previous Results
### Previous Results
#### Measured Model Abilities
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: telescope -->
Ability to summarize content across different languages.
#### Metrics
<!-- info: What metrics are typically used for this task? -->
<!-- scope: periscope -->
`ROUGE`
#### Proposed Evaluation
<!-- info: List and describe the purpose of the metrics and evaluation methodology (including human evaluation) that the dataset creators used when introducing this task. -->
<!-- scope: microscope -->
ROUGE is used to measure content selection by comparing word overlap with reference summaries. In addition, the authors of the dataset also used human evaluation to evaluate content selection and fluency of the systems.
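As a reference point, the automatic part of this protocol can be approximated with the `evaluate` implementation of ROUGE; the texts below are placeholders, and the human evaluation has no automatic counterpart here.
```python
# Minimal ROUGE sketch with placeholder texts.
import evaluate

rouge = evaluate.load("rouge")
predictions = ["the cat sat on the mat"]       # placeholder model summary
references = ["a cat was sitting on the mat"]  # placeholder gold summary
print(rouge.compute(predictions=predictions, references=references))
```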
#### Previous results available?
<!-- info: Are previous results available? -->
<!-- scope: telescope -->
no
## Dataset Curation
### Original Curation
#### Original Curation Rationale
<!-- info: Original curation rationale -->
<!-- scope: telescope -->
The dataset was created to enable new approaches for cross-lingual and multilingual summarization, which are currently understudied, and to open up interesting new directions for research in summarization, e.g., exploring multi-source cross-lingual architectures (models that can summarize from multiple source languages into a target language) and building models that can summarize articles from any language to any other language for a given set of languages.
#### Communicative Goal
<!-- info: What was the communicative goal? -->
<!-- scope: periscope -->
Given an input article, produce a high quality summary of the article in the target language.
#### Sourced from Different Sources
<!-- info: Is the dataset aggregated from different data sources? -->
<!-- scope: telescope -->
no
### Language Data
#### How was Language Data Obtained?
<!-- info: How was the language data obtained? -->
<!-- scope: telescope -->
`Found`
#### Where was it found?
<!-- info: If found, where from? -->
<!-- scope: telescope -->
`Single website`
#### Language Producers
<!-- info: What further information do we have on the language producers? -->
<!-- scope: microscope -->
WikiHow, an online resource of how-to guides written and reviewed by human authors, is used as the data source.
#### Topics Covered
<!-- info: Does the language in the dataset focus on specific topics? How would you describe them? -->
<!-- scope: periscope -->
The articles cover 19 broad categories, including health, arts and entertainment, personal care and style, travel, and education and communications. The categories span a broad set of genres and topics.
#### Data Validation
<!-- info: Was the text validated by a different worker or a data curator? -->
<!-- scope: telescope -->
not validated
#### Was Data Filtered?
<!-- info: Were text instances selected or filtered? -->
<!-- scope: telescope -->
not filtered
### Structured Annotations
#### Additional Annotations?
<!-- quick -->
<!-- info: Does the dataset have additional annotations for each instance? -->
<!-- scope: telescope -->
none
#### Annotation Service?
<!-- info: Was an annotation service used? -->
<!-- scope: telescope -->
no
### Consent
#### Any Consent Policy?
<!-- info: Was there a consent policy involved when gathering the data? -->
<!-- scope: telescope -->
yes
#### Consent Policy Details
<!-- info: What was the consent policy? -->
<!-- scope: microscope -->
(1) Text Content. All text posted by Users to the Service is sub-licensed by wikiHow to other Users under a Creative Commons license as provided herein. The Creative Commons license allows such text content to be used freely for non-commercial purposes, so long as it is used and attributed to the original author as specified under the terms of the license. Allowing free republication of our articles helps wikiHow achieve its mission by providing instruction on solving the problems of everyday life to more people for free. In order to support this goal, wikiHow hereby grants each User of the Service a license to all text content that Users contribute to the Service under the terms and conditions of a Creative Commons CC BY-NC-SA 3.0 License. Please be sure to read the terms of the license carefully. You continue to own all right, title, and interest in and to your User Content, and you are free to distribute it as you wish, whether for commercial or non-commercial purposes.
#### Other Consented Downstream Use
<!-- info: What other downstream uses of the data did the original data creators and the data curators consent to? -->
<!-- scope: microscope -->
The data is made freely available under the Creative Commons license, so there are no restrictions on downstream uses as long as it's for non-commercial purposes.
### Private Identifying Information (PII)
#### Contains PII?
<!-- quick -->
<!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? -->
<!-- scope: telescope -->
no PII
#### Justification for no PII
<!-- info: Provide a justification for selecting `no PII` above. -->
<!-- scope: periscope -->
Only the article text and summaries were collected. No user information was retained in the dataset.
### Maintenance
#### Any Maintenance Plan?
<!-- info: Does the original dataset have a maintenance plan? -->
<!-- scope: telescope -->
no
## Broader Social Context
### Previous Work on the Social Impact of the Dataset
#### Usage of Models based on the Data
<!-- info: Are you aware of cases where models trained on the task featured in this dataset or related tasks have been used in automated systems? -->
<!-- scope: telescope -->
yes - other datasets featuring the same task
### Impact on Under-Served Communities
#### Addresses needs of underserved Communities?
<!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved, for example, because their language, language variety, or social or geographical context is underrepresented in NLP and NLG resources (datasets and models). -->
<!-- scope: telescope -->
no
### Discussion of Biases
#### Any Documented Social Biases?
<!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. -->
<!-- scope: telescope -->
yes
## Considerations for Using the Data
### PII Risks and Liability
### Licenses
#### Copyright Restrictions on the Dataset
<!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset? -->
<!-- scope: periscope -->
`non-commercial use only`
#### Copyright Restrictions on the Language Data
<!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data? -->
<!-- scope: periscope -->
`non-commercial use only`
### Known Technical Limitations
| vector/test_demo | [
"language_creators:found",
"language:cn",
"region:us"
] | 2022-08-05T06:00:25+00:00 | {"language_creators": ["found"], "language": ["cn"], "annotators": ["found"]} | 2022-08-15T08:09:12+00:00 | [] | [
"cn"
]
d7b7d99d2d2617f6f1c221d1729ad6a9373a8ee0 | TODO: Add YAML tags here. Copy-paste the tags obtained with the online tagging app: https://huggingface.co/spaces/huggingface/datasets-tagging
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | lingophilonaut/dummy_json | [
"region:us"
] | 2022-08-05T06:21:08+00:00 | {} | 2022-08-05T06:27:14+00:00 | [] | [] |
287b269de92b7833e9d2a27177dfad0d1dec0eff |
# Dataset Card for `reviews_with_drift`
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
### Dataset Summary
This dataset was crafted to be used in our tutorial [Link to the tutorial when ready]. It consists of a large Movie Review Dataset mixed with some reviews from a Hotel Review Dataset. The training and validation sets are obtained purely from the Movie Review Dataset, while the production set is mixed. Some other features have been added (`age`, `gender`, `context`), as well as a made-up timestamp `prediction_ts` recording when the inference took place.
### Supported Tasks and Leaderboards
`text-classification`, `sentiment-classification`: The dataset is mainly used for text classification: given the text, predict the sentiment (positive or negative).
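As a rough orientation, here is a minimal loading sketch for this task; the Hub repository ID below is a placeholder and the split names are assumptions based on the summary above, so adjust both to the published dataset:
```python
from datasets import load_dataset

# Placeholder repository ID -- substitute the actual Hub name of this dataset.
ds = load_dataset("arize-ai/reviews_with_drift")

# Shows the available splits (the summary suggests training/validation/production).
print(ds)

# Each row should carry the review text, its sentiment label, and the added
# features described above (age, gender, context, prediction_ts).
first_split = next(iter(ds.values()))
print(first_split[0])
```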
### Languages
The text is mainly written in English.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@fjcasti1](https://github.com/fjcasti1) for adding this dataset. | arize-ai/human_actions_quality_drift | [
"task_categories:image-classification",
"task_ids:multi-class-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|imdb",
"language:en",
"license:mit",
"region:us"
] | 2022-08-05T07:18:19+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|imdb"], "task_categories": ["image-classification"], "task_ids": ["multi-class-classification"], "pretty_name": "sentiment-classification-reviews-with-drift"} | 2022-08-05T07:41:46+00:00 | [] | [
"en"
] |
e6a474f5ff4133338b4b9b5e393bad65e787b152 |
## Dataset Description
- **Repository:** [SLED Github repository](https://github.com/Mivg/SLED)
- **Paper:** [Efficient Long-Text Understanding with Short-Text Models](https://arxiv.org/pdf/2208.00748.pdf)
# Dataset Card for SLED
## Overview
This dataset is based on the [SCROLLS](https://huggingface.co/datasets/tau/scrolls) dataset ([paper](https://arxiv.org/pdf/2201.03533.pdf)), the [SQuAD 1.1](https://huggingface.co/datasets/squad) dataset and the [HotpotQA](https://huggingface.co/datasets/hotpot_qa) dataset.
It doesn't contain any unpublished data, but includes the configuration needed for the [Efficient Long-Text Understanding with Short-Text Models](https://arxiv.org/pdf/2208.00748.pdf) paper.
## Tasks
The tasks included are:
#### GovReport ([Huang et al., 2021](https://arxiv.org/pdf/2104.02112.pdf))
GovReport is a summarization dataset of reports addressing various national policy issues published by the
Congressional Research Service and the U.S. Government Accountability Office, where each document is paired with a hand-written executive summary.
The reports and their summaries are longer than their equivalents in other popular long-document summarization datasets;
for example, GovReport's documents are approximately 1.5 and 2.5 times longer than the documents in Arxiv and PubMed, respectively.
#### SummScreenFD ([Chen et al., 2021](https://arxiv.org/pdf/2104.07091.pdf))
SummScreenFD is a summarization dataset in the domain of TV shows (e.g. Friends, Game of Thrones).
Given a transcript of a specific episode, the goal is to produce the episode's recap.
The original dataset is divided into two complementary subsets, based on the source of its community contributed transcripts.
For SCROLLS, we use the ForeverDreaming (FD) subset, as it incorporates 88 different shows,
making it a more diverse alternative to the TV MegaSite (TMS) subset, which has only 10 shows.
Community-authored recaps for the ForeverDreaming transcripts were collected from English Wikipedia and TVMaze.
#### QMSum ([Zhong et al., 2021](https://arxiv.org/pdf/2104.05938.pdf))
QMSum is a query-based summarization dataset, consisting of 232 meeting transcripts from multiple domains.
The corpus covers academic group meetings at the International Computer Science Institute and their summaries, industrial product meetings for designing a remote control,
and committee meetings of the Welsh and Canadian Parliaments, dealing with a variety of public policy issues.
Annotators were tasked with writing queries about the broad contents of the meetings, as well as specific questions about certain topics or decisions,
while ensuring that the relevant text for answering each query spans at least 200 words or 10 turns.
#### NarrativeQA ([Kočiský et al., 2018](https://arxiv.org/pdf/1712.07040.pdf))
NarrativeQA (Kočiský et al., 2018) is an established question answering dataset over entire books from Project Gutenberg and movie scripts from different websites.
Annotators were given summaries of the books and scripts obtained from Wikipedia, and asked to generate question-answer pairs,
resulting in about 30 questions and answers for each of the 1,567 books and scripts.
They were encouraged to use their own words rather than copying, and to avoid asking yes/no questions or ones about the cast.
Each question was then answered by an additional annotator, providing each question with two reference answers (unless both answers are identical).
#### Qasper ([Dasigi et al., 2021](https://arxiv.org/pdf/2105.03011.pdf))
Qasper is a question answering dataset over NLP papers filtered from the Semantic Scholar Open Research Corpus (S2ORC).
Questions were written by NLP practitioners after reading only the title and abstract of the papers,
while another set of NLP practitioners annotated the answers given the entire document.
Qasper contains abstractive, extractive, and yes/no questions, as well as unanswerable ones.
#### QuALITY ([Pang et al., 2021](https://arxiv.org/pdf/2112.08608.pdf))
QuALITY is a multiple-choice question answering dataset over articles and stories sourced from Project Gutenberg,
the Open American National Corpus, and more.
Experienced writers wrote questions and distractors, and were incentivized to write answerable, unambiguous questions such that in order to correctly answer them,
human annotators must read large portions of the given document.
Reference answers were then determined by a majority vote over the annotators' and the writer's answers.
To measure the difficulty of their questions, Pang et al. conducted a speed validation process,
where another set of annotators were asked to answer questions given only a short period of time to skim through the document.
As a result, 50% of the questions in QuALITY are labeled as hard, i.e. the majority of the annotators in the speed validation setting chose the wrong answer.
#### ContractNLI ([Koreeda and Manning, 2021](https://arxiv.org/pdf/2110.01799.pdf))
Contract NLI is a natural language inference dataset in the legal domain.
Given a non-disclosure agreement (the premise), the task is to predict whether a particular legal statement (the hypothesis) is entailed, not entailed (neutral), or cannot be entailed (contradiction) from the contract.
The NDAs were manually picked after simple filtering from the Electronic Data Gathering, Analysis, and Retrieval system (EDGAR) and Google.
The dataset contains a total of 607 contracts and 17 unique hypotheses, which were combined to produce the dataset's 10,319 examples.
#### SQuAD 1.1 ([Rajpurkar et al., 2016](https://arxiv.org/pdf/1606.05250.pdf))
Stanford Question Answering Dataset (SQuAD) is a reading comprehension
dataset, consisting of questions posed by crowdworkers on a set of Wikipedia
articles, where the answer to every question is a segment of text, or span,
from the corresponding reading passage, or the question might be unanswerable.
#### HotpotQA ([Yang et al., 2018](https://arxiv.org/pdf/1809.09600.pdf))
HotpotQA is a new dataset with 113k Wikipedia-based question-answer pairs with four key features:
(1) the questions require finding and reasoning over multiple supporting documents to answer;
(2) the questions are diverse and not constrained to any pre-existing knowledge bases or knowledge schemas;
(3) we provide sentence-level supporting facts required for reasoning, allowing QA systems to reason with strong supervision and explain the predictions;
(4) we offer a new type of factoid comparison questions to test QA systems' ability to extract relevant facts and perform necessary comparison.
## Data Fields
All the datasets in the benchmark are in the same input-output format
- `input`: a `string` feature. The input document.
- `input_prefix`: an optional `string` feature, for the datasets containing prefix (e.g. question)
- `output`: a `string` feature. The target.
- `id`: a `string` feature. Unique per input.
- `pid`: a `string` feature. Unique per input-output pair (can differ from 'id' in NarrativeQA and Qasper, where there is more than one valid target).
The datasets that contain `input_prefix` are listed below (a loading sketch follows the list):
- SQuAD - the question
- HotpotQA - the question
- qmsum - the query
- qasper - the question
- narrative_qa - the question
- quality - the question + the four choices
- contract_nli - the hypothesis
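For orientation, here is a minimal loading sketch, assuming the standard train/validation/test splits; `qasper` is one of the configurations listed in this repository's metadata, and any other configuration name can be substituted:
```python
from datasets import load_dataset

# Any configuration listed above can be substituted for "qasper".
qasper = load_dataset("tau/sled", "qasper")

example = qasper["train"][0]
question = example["input_prefix"]  # present because qasper is a prefix dataset
document = example["input"]         # the full paper text
target = example["output"]          # one valid answer; `pid` disambiguates pairs
print(example["id"], example["pid"])
```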
## Controlled experiments
To test multiple properties of SLED, we modify SQuAD 1.1 [Rajpurkar et al., 2016](https://arxiv.org/pdf/1606.05250.pdf)
and HotpotQA [Yang et al., 2018](https://arxiv.org/pdf/1809.09600.pdf) to create a few controlled experiment settings.
These are accessible via the following configurations (a comparison sketch follows the list):
- squad - Contains the original version of SQuAD 1.1 (question + passage)
- squad_ordered_distractors - For each example, 9 random distractor passages are concatenated (separated by '\n')
- squad_shuffled_distractors - For each example, 9 random distractor passages are added (separated by '\n'), and jointly the 10 passages are randomly shuffled
- hotpotqa - A clean version of HotpotQA, where each input contains only the two gold passages (separated by '\n')
- hotpotqa_second_only - In each example, the input contains only the second gold passage
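A hedged sketch of contrasting two of these settings (the split name is an assumption):
```python
from datasets import load_dataset

# The clean SQuAD configuration versus the shuffled-distractor variant.
clean = load_dataset("tau/sled", "squad", split="validation")
noisy = load_dataset("tau/sled", "squad_shuffled_distractors", split="validation")

# The distractor variant packs 10 passages per example, so its inputs
# should be roughly an order of magnitude longer than the clean ones.
print(len(clean[0]["input"]), len(noisy[0]["input"]))
```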
## Citation
If you use this dataset, **please make sure to cite all the original dataset papers as well as SCROLLS.** [[bibtex](https://drive.google.com/uc?export=download&id=1IUYIzQD9DPsECw0JWkwk4Ildn8JOMtuU)]
```
@inproceedings{Ivgi2022EfficientLU,
title={Efficient Long-Text Understanding with Short-Text Models},
author={Maor Ivgi and Uri Shaham and Jonathan Berant},
year={2022}
}
``` | tau/sled | [
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:text-generation",
"task_ids:multiple-choice-qa",
"task_ids:natural-language-inference",
"language:en",
"license:mit",
"multi-hop-question-answering",
"query-based-summarization",
"long-texts",
"arxiv:2208.00748",
"arxiv:2201.03533",
"arxiv:2104.02112",
"arxiv:2104.07091",
"arxiv:2104.05938",
"arxiv:1712.07040",
"arxiv:2105.03011",
"arxiv:2112.08608",
"arxiv:2110.01799",
"arxiv:1606.05250",
"arxiv:1809.09600",
"region:us"
] | 2022-08-05T07:54:23+00:00 | {"language": ["en"], "license": ["mit"], "task_categories": ["question-answering", "summarization", "text-generation"], "task_ids": ["multiple-choice-qa", "natural-language-inference"], "configs": ["gov_report", "summ_screen_fd", "qmsum", "qasper", "narrative_qa", "quality", "contract_nli", "squad", "squad_shuffled_distractors", "squad_ordered_distractors", "hotpotqa", "hotpotqa_second_only"], "tags": ["multi-hop-question-answering", "query-based-summarization", "long-texts"]} | 2022-10-25T06:33:44+00:00 | [
"2208.00748",
"2201.03533",
"2104.02112",
"2104.07091",
"2104.05938",
"1712.07040",
"2105.03011",
"2112.08608",
"2110.01799",
"1606.05250",
"1809.09600"
] | [
"en"
] |
ac1325919b9de7b6daf6bff34d77ccff838ea52d |
Knowledge graph triple to answer verbalization dataset.
VANiLLa: Verbalized answers in natural language at large scale
| rony/VANiLLa | [
"license:mit",
"region:us"
] | 2022-08-05T08:32:04+00:00 | {"license": "mit"} | 2022-08-05T10:45:29+00:00 | [] | [] | TAGS
#license-mit #region-us
|
Knowledge graph triple to answer verbalization dataset.
VANiLLa: Verbalized answers in natural language at large scale
| [] | [
"TAGS\n#license-mit #region-us \n"
] | [
11
] | [
"passage: TAGS\n#license-mit #region-us \n"
] |
b59d979a9ee1e04ae424a714479df4baa273396e |
# Dataset Card for Norwegian PAWS-X
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [NB AiLab](https://ai.nb.no/)
- **Repository:** [Norwegian PAWS-X Repository](#)
- **Point of Contact:** [[email protected]](mailto:[email protected])
### Dataset Summary
Norwegian PAWS-X is an extension of the PAWS-X dataset. PAWS-X is a multilingual version of PAWS (Paraphrase Adversaries from Word Scrambling) for six languages. The Norwegian PAWS-X dataset has machine-translated versions of the original PAWS-X dataset into Norwegian Bokmål and Nynorsk.
### Languages
- Norwegian Bokmål (`nb`)
- Norwegian Nynorsk (`nn`)
## Dataset Structure
### Data Instances
Each instance includes a pair of sentences in Norwegian along with a binary label indicating whether the sentences are paraphrases of each other.
### Data Fields
- `id`: An identifier for each example (int32)
- `sentence1`: The first sentence in Norwegian (string)
- `sentence2`: The second sentence in Norwegian (string)
- `label`: Binary label, where '1' indicates the sentences are paraphrases and '0' indicates they are not (class_label: '0', '1')
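
A minimal usage sketch based on the fields above (the config name `"nb"` is an assumption — the repository may expose a single default config instead, in which case drop the second argument):

```python
from datasets import load_dataset

# Load the Bokmål portion (config name assumed; see note above).
dataset = load_dataset("NbAiLab/norwegian-paws-x", "nb")

example = dataset["train"][0]
print(example["id"], example["label"])  # label: 1 = paraphrase, 0 = not
print(example["sentence1"])
print(example["sentence2"])
```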
### Data Splits
The dataset is divided into training, validation, and test sets. The exact numbers of instances in each split will be as per the original PAWS-X dataset.
## Dataset Creation
### Curation Rationale
Norwegian PAWS-X was created to extend the PAWS paraphrase identification task to the Norwegian language, including both Bokmål and Nynorsk standards. This promotes multilingual and cross-lingual research in paraphrase identification.
### Source Data
The source data consists of human-translated PAWS pairs in six languages. For the Norwegian PAWS-X dataset, these pairs were translated into Norwegian Bokmål and Nynorsk using FAIR’s No Language Left Behind (NLLB) 3.3B-parameter model.
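
For illustration only — this is not the curators' exact pipeline — a hedged sketch of such a translation pass with the NLLB 3.3B checkpoint on the Hub, where `nob_Latn` and `nno_Latn` are the FLORES-200 codes for Bokmål and Nynorsk:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

checkpoint = "facebook/nllb-200-3.3B"
tokenizer = AutoTokenizer.from_pretrained(checkpoint, src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

def translate(text, tgt_lang="nob_Latn"):
    # Forcing the first decoder token selects the target language.
    inputs = tokenizer(text, return_tensors="pt")
    generated = model.generate(
        **inputs,
        forced_bos_token_id=tokenizer.convert_tokens_to_ids(tgt_lang),
        max_new_tokens=128,
    )
    return tokenizer.batch_decode(generated, skip_special_tokens=True)[0]

print(translate("The flights were cancelled because of the weather."))
```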
### Annotations
The dataset retains the original PAWS labels, which were created through a combination of expert and machine-generated annotations.
### Personal and Sensitive Information
There is no known personal or sensitive information in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset helps in promoting the development of NLP technologies in Norwegian.
### Other Known Limitations
There may be potential issues related to the translation quality, as the translations were generated using a machine translation model.
## Additional Information
### Dataset Curators
The dataset was curated by researcher Javier de la Rosa.
### Licensing Information
Original PAWS-X License:
- The dataset may be freely used for any purpose, with acknowledgment of Google LLC as the data source being appreciated. The dataset is provided "AS IS" without any warranty, express or implied. Google disclaims all liability for any damages, direct or indirect, resulting from the use of the dataset.
Norwegian PAWS-X License:
- CC BY 4.0
| NbAiLab/norwegian-paws-x | [
"task_categories:text-classification",
"task_ids:semantic-similarity-classification",
"task_ids:semantic-similarity-scoring",
"task_ids:text-scoring",
"task_ids:multi-input-text-classification",
"annotations_creators:expert-generated",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-paws",
"language:nb",
"language:nn",
"license:cc-by-4.0",
"region:us"
] | 2022-08-05T09:51:20+00:00 | {"annotations_creators": ["expert-generated", "machine-generated"], "language_creators": ["machine-generated"], "language": ["nb", "nn"], "license": ["cc-by-4.0"], "multilinguality": ["multilingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|other-paws"], "task_categories": ["text-classification"], "task_ids": ["semantic-similarity-classification", "semantic-similarity-scoring", "text-scoring", "multi-input-text-classification"], "pretty_name": "NbAiLab/norwegian-paws-x"} | 2023-08-18T10:26:40+00:00 | [] | [
"nb",
"nn"
] | TAGS
#task_categories-text-classification #task_ids-semantic-similarity-classification #task_ids-semantic-similarity-scoring #task_ids-text-scoring #task_ids-multi-input-text-classification #annotations_creators-expert-generated #annotations_creators-machine-generated #language_creators-machine-generated #multilinguality-multilingual #size_categories-10K<n<100K #source_datasets-extended|other-paws #language-Norwegian Bokmål #language-Norwegian Nynorsk #license-cc-by-4.0 #region-us
|
# Dataset Card for Norwegian PAWS-X
## Table of Contents
- Dataset Description
- Dataset Summary
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: NB AiLab
- Repository: Norwegian PAWS-X Repository
- Point of Contact: ai-lab@URL
### Dataset Summary
Norwegian PAWS-X is an extension of the PAWS-X dataset. PAWS-X is a multilingual version of PAWS (Paraphrase Adversaries from Word Scrambling) for six languages. The Norwegian PAWS-X dataset has machine-translated versions of the original PAWS-X dataset into Norwegian Bokmål and Nynorsk.
### Languages
- Norwegian Bokmål ('nb')
- Norwegian Nynorsk ('nn')
## Dataset Structure
### Data Instances
Each instance includes a pair of sentences in Norwegian along with a binary label indicating whether the sentences are paraphrases of each other.
### Data Fields
- 'id': An identifier for each example (int32)
- 'sentence1': The first sentence in Norwegian (string)
- 'sentence2': The second sentence in Norwegian (string)
- 'label': Binary label, where '1' indicates the sentences are paraphrases and '0' indicates they are not (class_label: '0', '1')
### Data Splits
The dataset is divided into training, validation, and test sets. The exact numbers of instances in each split will be as per the original PAWS-X dataset.
## Dataset Creation
### Curation Rationale
Norwegian PAWS-X was created to extend the PAWS paraphrase identification task to the Norwegian language, including both Bokmål and Nynorsk standards. This promotes multilingual and cross-lingual research in paraphrase identification.
### Source Data
The source data consists of human-translated PAWS pairs in six languages. For the Norwegian PAWS-X dataset, these pairs were translated into Norwegian Bokmål and Nynorsk using FAIR’s No Language Left Behind (NLLB) 3.3B-parameter model.
### Annotations
The dataset retains the original PAWS labels, which were created through a combination of expert and machine-generated annotations.
### Personal and Sensitive Information
There is no known personal or sensitive information in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset helps in promoting the development of NLP technologies in Norwegian.
### Other Known Limitations
There may be potential issues related to the translation quality, as the translations were generated using a machine translation model.
## Additional Information
### Dataset Curators
The dataset was curated by researcher Javier de la Rosa.
### Licensing Information
Original PAWS-X License:
- The dataset may be freely used for any purpose, with acknowledgment of Google LLC as the data source being appreciated. The dataset is provided "AS IS" without any warranty, express or implied. Google disclaims all liability for any damages, direct or indirect, resulting from the use of the dataset.
Norwegian PAWS-X License:
- CC BY 4.0
| [
"# Dataset Card for Norwegian PAWS-X",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Languages\n - Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n - Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n - Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n - Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: NB AiLab\n- Repository: Norwegian PAWS-X Repository\n- Point of Contact: ai-lab@URL",
"### Dataset Summary\n\nNorwegian PAWS-X is an extension of the PAWS-X dataset. PAWS-X is a multilingual version of PAWS (Paraphrase Adversaries from Word Scrambling) for six languages. The Norwegian PAWS-X dataset has machine-translated versions of the original PAWS-X dataset into Norwegian Bokmål and Nynorsk.",
"### Languages\n\n- Norwegian Bokmål ('nb')\n- Norwegian Nynorsk ('nn')",
"## Dataset Structure",
"### Data Instances\n\nEach instance includes a pair of sentences in Norwegian along with a binary label indicating whether the sentences are paraphrases of each other.",
"### Data Fields\n\n- 'id': An identifier for each example (int32)\n- 'sentence1': The first sentence in Norwegian (string)\n- 'sentence2': The second sentence in Norwegian (string)\n- 'label': Binary label, where '1' indicates the sentences are paraphrases and '0' indicates they are not (class_label: '0', '1')",
"### Data Splits\n\nThe dataset is divided into training, validation, and test sets. The exact numbers of instances in each split will be as per the original PAWS-X dataset.",
"## Dataset Creation",
"### Curation Rationale\n\nNorwegian PAWS-X was created to extend the PAWS paraphrase identification task to the Norwegian language, including both Bokmål and Nynorsk standards. This promotes multilingual and cross-lingual research in paraphrase identification.",
"### Source Data\n\nThe source data consists of human-translated PAWS pairs in six languages. For the Norwegian PAWS-X dataset, these pairs were translated into Norwegian Bokmål and Nynorsk using FAIR’s No Language Left Behind 3.3B parameters model.",
"### Annotations\n\nThe dataset retains the original PAWS labels, which were created through a combination of expert and machine-generated annotations.",
"### Personal and Sensitive Information\n\nThere is no known personal or sensitive information in this dataset.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nThis dataset helps in promoting the development of NLP technologies in Norwegian.",
"### Other Known Limitations\n\nThere may be potential issues related to the translation quality, as the translations were generated using a machine translation model.",
"## Additional Information",
"### Dataset Curators\n\nThe dataset was curated by researcher Javier de la Rosa.",
"### Licensing Information\n\nOriginal PAWS-X License:\n- The dataset may be freely used for any purpose, with acknowledgment of Google LLC as the data source being appreciated. The dataset is provided \"AS IS\" without any warranty, express or implied. Google disclaims all liability for any damages, direct or indirect, resulting from the use of the dataset.\n\nNorwegian PAWS-X License:\n- CC BY 4.0"
] | [
"TAGS\n#task_categories-text-classification #task_ids-semantic-similarity-classification #task_ids-semantic-similarity-scoring #task_ids-text-scoring #task_ids-multi-input-text-classification #annotations_creators-expert-generated #annotations_creators-machine-generated #language_creators-machine-generated #multilinguality-multilingual #size_categories-10K<n<100K #source_datasets-extended|other-paws #language-Norwegian Bokmål #language-Norwegian Nynorsk #license-cc-by-4.0 #region-us \n",
"# Dataset Card for Norwegian PAWS-X",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Languages\n - Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n - Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n - Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n - Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: NB AiLab\n- Repository: Norwegian PAWS-X Repository\n- Point of Contact: ai-lab@URL",
"### Dataset Summary\n\nNorwegian PAWS-X is an extension of the PAWS-X dataset. PAWS-X is a multilingual version of PAWS (Paraphrase Adversaries from Word Scrambling) for six languages. The Norwegian PAWS-X dataset has machine-translated versions of the original PAWS-X dataset into Norwegian Bokmål and Nynorsk.",
"### Languages\n\n- Norwegian Bokmål ('nb')\n- Norwegian Nynorsk ('nn')",
"## Dataset Structure",
"### Data Instances\n\nEach instance includes a pair of sentences in Norwegian along with a binary label indicating whether the sentences are paraphrases of each other.",
"### Data Fields\n\n- 'id': An identifier for each example (int32)\n- 'sentence1': The first sentence in Norwegian (string)\n- 'sentence2': The second sentence in Norwegian (string)\n- 'label': Binary label, where '1' indicates the sentences are paraphrases and '0' indicates they are not (class_label: '0', '1')",
"### Data Splits\n\nThe dataset is divided into training, validation, and test sets. The exact numbers of instances in each split will be as per the original PAWS-X dataset.",
"## Dataset Creation",
"### Curation Rationale\n\nNorwegian PAWS-X was created to extend the PAWS paraphrase identification task to the Norwegian language, including both Bokmål and Nynorsk standards. This promotes multilingual and cross-lingual research in paraphrase identification.",
"### Source Data\n\nThe source data consists of human-translated PAWS pairs in six languages. For the Norwegian PAWS-X dataset, these pairs were translated into Norwegian Bokmål and Nynorsk using FAIR’s No Language Left Behind 3.3B parameters model.",
"### Annotations\n\nThe dataset retains the original PAWS labels, which were created through a combination of expert and machine-generated annotations.",
"### Personal and Sensitive Information\n\nThere is no known personal or sensitive information in this dataset.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nThis dataset helps in promoting the development of NLP technologies in Norwegian.",
"### Other Known Limitations\n\nThere may be potential issues related to the translation quality, as the translations were generated using a machine translation model.",
"## Additional Information",
"### Dataset Curators\n\nThe dataset was curated by researcher Javier de la Rosa.",
"### Licensing Information\n\nOriginal PAWS-X License:\n- The dataset may be freely used for any purpose, with acknowledgment of Google LLC as the data source being appreciated. The dataset is provided \"AS IS\" without any warranty, express or implied. Google disclaims all liability for any damages, direct or indirect, resulting from the use of the dataset.\n\nNorwegian PAWS-X License:\n- CC BY 4.0"
] | [
169,
10,
111,
33,
87,
23,
6,
36,
94,
44,
5,
59,
65,
34,
21,
8,
23,
31,
5,
20,
99
] | [
"passage: TAGS\n#task_categories-text-classification #task_ids-semantic-similarity-classification #task_ids-semantic-similarity-scoring #task_ids-text-scoring #task_ids-multi-input-text-classification #annotations_creators-expert-generated #annotations_creators-machine-generated #language_creators-machine-generated #multilinguality-multilingual #size_categories-10K<n<100K #source_datasets-extended|other-paws #language-Norwegian Bokmål #language-Norwegian Nynorsk #license-cc-by-4.0 #region-us \n# Dataset Card for Norwegian PAWS-X## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Languages\n - Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n - Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n - Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n - Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: NB AiLab\n- Repository: Norwegian PAWS-X Repository\n- Point of Contact: ai-lab@URL### Dataset Summary\n\nNorwegian PAWS-X is an extension of the PAWS-X dataset. PAWS-X is a multilingual version of PAWS (Paraphrase Adversaries from Word Scrambling) for six languages. The Norwegian PAWS-X dataset has machine-translated versions of the original PAWS-X dataset into Norwegian Bokmål and Nynorsk.### Languages\n\n- Norwegian Bokmål ('nb')\n- Norwegian Nynorsk ('nn')## Dataset Structure### Data Instances\n\nEach instance includes a pair of sentences in Norwegian along with a binary label indicating whether the sentences are paraphrases of each other."
] |
4030949b0360722d8853eb01d407393de0b40bad |
# Dataset Card for Indonesian Google Play Review
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Scraped from an e-commerce app on Google Play.
### Supported Tasks and Leaderboards
Sentiment Analysis
### Languages
Indonesian
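
Since the instance and field layout is still marked "[More Information Needed]" below, a quick exploratory sketch helps; the column names used here (`text`, `label`) are guesses to verify interactively:

```python
from collections import Counter

from datasets import load_dataset

dataset = load_dataset("jakartaresearch/google-play-review")
print(dataset)              # shows the actual splits and column names
print(dataset["train"][0])  # inspect one review

# Hypothetical: assumes a 'label' column; adjust to the printed schema.
print(Counter(dataset["train"]["label"]))
```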
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@andreaschandra](https://github.com/andreaschandra) for adding this dataset. | jakartaresearch/google-play-review | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:id",
"license:cc-by-4.0",
"sentiment",
"google-play",
"indonesian",
"region:us"
] | 2022-08-06T04:00:32+00:00 | {"annotations_creators": ["found"], "language_creators": ["found"], "language": ["id"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification"], "pretty_name": "Indonesian Google Play Review", "tags": ["sentiment", "google-play", "indonesian"]} | 2022-08-06T15:24:49+00:00 | [] | [
"id"
] | TAGS
#task_categories-text-classification #task_ids-sentiment-classification #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Indonesian #license-cc-by-4.0 #sentiment #google-play #indonesian #region-us
|
# Dataset Card for Indonesian Google Play Review
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
Scraped from an e-commerce app on Google Play.
### Supported Tasks and Leaderboards
Sentiment Analysis
### Languages
Indonesian
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @andreaschandra for adding this dataset. | [
"# Dataset Card for Indonesian Google Play Review",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary\n\nScrapped from e-commerce app on Google Play.",
"### Supported Tasks and Leaderboards\n\nSentiment Analysis",
"### Languages\n\nIndonesian",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @andreaschandra for adding this dataset."
] | [
"TAGS\n#task_categories-text-classification #task_ids-sentiment-classification #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Indonesian #license-cc-by-4.0 #sentiment #google-play #indonesian #region-us \n",
"# Dataset Card for Indonesian Google Play Review",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary\n\nScrapped from e-commerce app on Google Play.",
"### Supported Tasks and Leaderboards\n\nSentiment Analysis",
"### Languages\n\nIndonesian",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @andreaschandra for adding this dataset."
] | [
99,
10,
125,
24,
17,
14,
6,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
17
] | [
"passage: TAGS\n#task_categories-text-classification #task_ids-sentiment-classification #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Indonesian #license-cc-by-4.0 #sentiment #google-play #indonesian #region-us \n# Dataset Card for Indonesian Google Play Review## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:### Dataset Summary\n\nScrapped from e-commerce app on Google Play.### Supported Tasks and Leaderboards\n\nSentiment Analysis### Languages\n\nIndonesian## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions\n\nThanks to @andreaschandra for adding this dataset."
] |
e758e7c5ea70be1fcfd0287c8a798ff91ff6e3d4 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: L-macc/autotrain-Biomedical_sc_summ-1217846148
* Dataset: Blaise-g/SumPubmed
* Config: Blaise-g--SumPubmed
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
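
For orientation, a local sketch of what this summarization evaluation computes (this is not the AutoTrain job itself); the `text`/`abstract` column names follow the col_mapping recorded in this repository's metadata:

```python
import evaluate
from datasets import load_dataset
from transformers import pipeline

# Score ROUGE on a small test slice (sketch only; results will differ
# from the full AutoTrain evaluation run).
dataset = load_dataset("Blaise-g/SumPubmed", split="test[:8]")
summarizer = pipeline(
    "summarization",
    model="L-macc/autotrain-Biomedical_sc_summ-1217846148",
)
predictions = [out["summary_text"]
               for out in summarizer(dataset["text"], truncation=True)]

rouge = evaluate.load("rouge")
print(rouge.compute(predictions=predictions, references=dataset["abstract"]))
```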
## Contributions
Thanks to [@L-macc](https://huggingface.co/L-macc) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-24db4c9a-12575677 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-06T07:22:57+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["Blaise-g/SumPubmed"], "eval_info": {"task": "summarization", "model": "L-macc/autotrain-Biomedical_sc_summ-1217846148", "metrics": [], "dataset_name": "Blaise-g/SumPubmed", "dataset_config": "Blaise-g--SumPubmed", "dataset_split": "test", "col_mapping": {"text": "text", "target": "abstract"}}} | 2022-08-06T11:52:18+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: L-macc/autotrain-Biomedical_sc_summ-1217846148
* Dataset: Blaise-g/SumPubmed
* Config: Blaise-g--SumPubmed
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @L-macc for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: L-macc/autotrain-Biomedical_sc_summ-1217846148\n* Dataset: Blaise-g/SumPubmed\n* Config: Blaise-g--SumPubmed\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @L-macc for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: L-macc/autotrain-Biomedical_sc_summ-1217846148\n* Dataset: Blaise-g/SumPubmed\n* Config: Blaise-g--SumPubmed\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @L-macc for evaluating this model."
] | [
13,
112,
17
] | [
"passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: L-macc/autotrain-Biomedical_sc_summ-1217846148\n* Dataset: Blaise-g/SumPubmed\n* Config: Blaise-g--SumPubmed\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @L-macc for evaluating this model."
] |
53ad23a7638e94f869adadb1bad94c93d6de0854 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: L-macc/autotrain-Biomedical_sc_summ-1217846144
* Dataset: Blaise-g/SumPubmed
* Config: Blaise-g--SumPubmed
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@L-macc](https://huggingface.co/L-macc) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-24db4c9a-12575678 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-06T07:23:10+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["Blaise-g/SumPubmed"], "eval_info": {"task": "summarization", "model": "L-macc/autotrain-Biomedical_sc_summ-1217846144", "metrics": [], "dataset_name": "Blaise-g/SumPubmed", "dataset_config": "Blaise-g--SumPubmed", "dataset_split": "test", "col_mapping": {"text": "text", "target": "abstract"}}} | 2022-08-06T12:16:16+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: L-macc/autotrain-Biomedical_sc_summ-1217846144
* Dataset: Blaise-g/SumPubmed
* Config: Blaise-g--SumPubmed
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @L-macc for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: L-macc/autotrain-Biomedical_sc_summ-1217846144\n* Dataset: Blaise-g/SumPubmed\n* Config: Blaise-g--SumPubmed\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @L-macc for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: L-macc/autotrain-Biomedical_sc_summ-1217846144\n* Dataset: Blaise-g/SumPubmed\n* Config: Blaise-g--SumPubmed\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @L-macc for evaluating this model."
] | [
13,
112,
17
] | [
"passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: L-macc/autotrain-Biomedical_sc_summ-1217846144\n* Dataset: Blaise-g/SumPubmed\n* Config: Blaise-g--SumPubmed\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @L-macc for evaluating this model."
] |
3ea2191ea55e1d81f858bec4b51fb42cda713184 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: L-macc/autotrain-Biomedical_sc_summ-1217846142
* Dataset: Blaise-g/SumPubmed
* Config: Blaise-g--SumPubmed
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@L-macc](https://huggingface.co/L-macc) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-24db4c9a-12575679 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-06T07:23:15+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["Blaise-g/SumPubmed"], "eval_info": {"task": "summarization", "model": "L-macc/autotrain-Biomedical_sc_summ-1217846142", "metrics": [], "dataset_name": "Blaise-g/SumPubmed", "dataset_config": "Blaise-g--SumPubmed", "dataset_split": "test", "col_mapping": {"text": "text", "target": "abstract"}}} | 2022-08-06T12:52:49+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: L-macc/autotrain-Biomedical_sc_summ-1217846142
* Dataset: Blaise-g/SumPubmed
* Config: Blaise-g--SumPubmed
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @L-macc for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: L-macc/autotrain-Biomedical_sc_summ-1217846142\n* Dataset: Blaise-g/SumPubmed\n* Config: Blaise-g--SumPubmed\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @L-macc for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: L-macc/autotrain-Biomedical_sc_summ-1217846142\n* Dataset: Blaise-g/SumPubmed\n* Config: Blaise-g--SumPubmed\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @L-macc for evaluating this model."
] | [
13,
112,
17
] | [
"passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: L-macc/autotrain-Biomedical_sc_summ-1217846142\n* Dataset: Blaise-g/SumPubmed\n* Config: Blaise-g--SumPubmed\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @L-macc for evaluating this model."
] |
2f911a890c1c1b9220100b4c83cfec52bc6cfe96 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: Blaise-g/long_t5_global_large_pubmed_wip2
* Dataset: Blaise-g/SumPubmed
* Config: Blaise-g--SumPubmed
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Blaise-g](https://huggingface.co/Blaise-g) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-c887ce73-12585680 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-06T07:48:41+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["Blaise-g/SumPubmed"], "eval_info": {"task": "summarization", "model": "Blaise-g/long_t5_global_large_pubmed_wip2", "metrics": [], "dataset_name": "Blaise-g/SumPubmed", "dataset_config": "Blaise-g--SumPubmed", "dataset_split": "test", "col_mapping": {"text": "text", "target": "abstract"}}} | 2022-08-06T15:29:37+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: Blaise-g/long_t5_global_large_pubmed_wip2
* Dataset: Blaise-g/SumPubmed
* Config: Blaise-g--SumPubmed
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @Blaise-g for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: Blaise-g/long_t5_global_large_pubmed_wip2\n* Dataset: Blaise-g/SumPubmed\n* Config: Blaise-g--SumPubmed\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Blaise-g for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: Blaise-g/long_t5_global_large_pubmed_wip2\n* Dataset: Blaise-g/SumPubmed\n* Config: Blaise-g--SumPubmed\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Blaise-g for evaluating this model."
] | [
13,
112,
18
] | [
"passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: Blaise-g/long_t5_global_large_pubmed_wip2\n* Dataset: Blaise-g/SumPubmed\n* Config: Blaise-g--SumPubmed\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @Blaise-g for evaluating this model."
] |
6a30110f887edd7edbad033275aa853ddd8c4a26 |
# Dataset Card for CLEVR-Math
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/dali-does/clevr-math
- **Paper:** https://arxiv.org/abs/2208.05358
- **Leaderboard:**
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset for compositional multimodal mathematical reasoning based on CLEVR.
#### Loading the data, preprocessing text with CLIP
```
from transformers import CLIPProcessor
from datasets import load_dataset, DownloadConfig

dl_config = DownloadConfig(resume_download=True,
                           num_proc=8,
                           force_download=True)

# Load 'general' instance of dataset
dataset = load_dataset('dali-does/clevr-math', download_config=dl_config)

# Load version with only multihop in test data
dataset_multihop = load_dataset('dali-does/clevr-math', 'multihop',
                                download_config=dl_config)

model_path = "openai/clip-vit-base-patch32"
extractor = CLIPProcessor.from_pretrained(model_path)

def transform_tokenize(e):
    # CLIP expects RGB input; the processor tokenizes the questions and
    # preprocesses the images in one call. Padding is handled here, since
    # Dataset.map() does not accept a `padding` keyword.
    e['image'] = [image.convert('RGB') for image in e['image']]
    return extractor(text=e['question'],
                     images=e['image'],
                     padding=True)

dataset = dataset.map(transform_tokenize,
                      batched=True,
                      num_proc=8)

# Keep only the subtraction templates.
dataset_subtraction = dataset.filter(lambda e:
    e['template'].startswith('subtraction'), num_proc=4)
```
### Supported Tasks and Leaderboards
Leaderboard will be announced at a later date.
### Languages
The dataset is currently only available in English. To extend the dataset to other languages, the CLEVR templates must be rewritten in the target language.
## Dataset Structure
### Data Instances
* `general` containing the default version with multihop questions in train and test
* `multihop` containing multihop questions only in test data to test generalisation of reasoning
### Data Fields
```
features = datasets.Features(
{
"template": datasets.Value("string"),
"id": datasets.Value("string"),
"question": datasets.Value("string"),
"image": datasets.Image(),
"label": datasets.Value("int64")
}
)
```
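
For orientation, a single record then looks roughly like this (a sketch; the template value shown is only an example of the family naming):

```
from datasets import load_dataset

sample = load_dataset('dali-does/clevr-math', split='train')[0]
print(sample['question'])  # natural-language math question about the image
print(sample['label'])     # integer answer to the question
print(sample['template'])  # template family, e.g. something like 'subtraction-...'
```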
### Data Splits
train/val/test
## Dataset Creation
Data is generated using code provided with the CLEVR dataset, using Blender and templates constructed by the dataset curators.
## Considerations for Using the Data
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Adam Dahlgren Lindström - [email protected]
### Licensing Information
Licensed under Creative Commons Attribution Share Alike 4.0 International (CC-by 4.0).
### Citation Information
[More Information Needed]
```
@misc{https://doi.org/10.48550/arxiv.2208.05358,
doi = {10.48550/ARXIV.2208.05358},
url = {https://arxiv.org/abs/2208.05358},
author = {Lindström, Adam Dahlgren and Abraham, Savitha Sam},
keywords = {Machine Learning (cs.LG), Computation and Language (cs.CL), Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences, I.2.7; I.2.10; I.2.6; I.4.8; I.1.4},
title = {CLEVR-Math: A Dataset for Compositional Language, Visual, and Mathematical Reasoning},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution Share Alike 4.0 International}
}
```
### Contributions
Thanks to [@dali-does](https://github.com/dali-does) for adding this dataset.
| dali-does/clevr-math | [
"task_categories:visual-question-answering",
"task_ids:visual-question-answering",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"source_datasets:clevr",
"language:en",
"license:cc-by-4.0",
"reasoning",
"neuro-symbolic",
"multimodal",
"arxiv:2208.05358",
"region:us"
] | 2022-08-06T11:09:39+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["machine-generated"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "source_datasets": ["clevr"], "task_categories": ["visual-question-answering"], "task_ids": ["visual-question-answering"], "pretty_name": "CLEVR-Math - Compositional language, visual, and mathematical reasoning", "tags": ["reasoning", "neuro-symbolic", "multimodal"]} | 2022-10-31T11:28:31+00:00 | [
"2208.05358"
] | [
"en"
] | TAGS
#task_categories-visual-question-answering #task_ids-visual-question-answering #annotations_creators-machine-generated #language_creators-machine-generated #multilinguality-monolingual #source_datasets-clevr #language-English #license-cc-by-4.0 #reasoning #neuro-symbolic #multimodal #arxiv-2208.05358 #region-us
|
# Dataset Card for CLEVR-Math
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Considerations for Using the Data
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage:
- Repository: URL
- Paper: URL
- Leaderboard:
- Point of Contact: dali@URL
### Dataset Summary
Dataset for compositional multimodal mathematical reasoning based on CLEVR.
#### Loading the data, preprocessing text with CLIP
### Supported Tasks and Leaderboards
Leaderboard will be announced at a later date.
### Languages
The dataset is currently only available in English. To extend the dataset to other languages, the CLEVR templates must be rewritten in the target language.
## Dataset Structure
### Data Instances
* 'general' containing the default version with multihop questions in train and test
* 'multihop' containing multihop questions only in test data to test generalisation of reasoning
### Data Fields
### Data Splits
train/val/test
## Dataset Creation
Data is generated using code provided with the CLEVR dataset, using Blender and templates constructed by the dataset curators.
## Considerations for Using the Data
### Other Known Limitations
## Additional Information
### Dataset Curators
Adam Dahlgren Lindström - dali@URL
### Licensing Information
Licensed under Creative Commons Attribution Share Alike 4.0 International (CC-by 4.0).
### Contributions
Thanks to @dali-does for adding this dataset.
| [
"# Dataset Card for CLEVR-Math",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n- Considerations for Using the Data\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:*URL\n- Paper:*URL\n- Leaderboard:\n- Point of Contact:*dali@URL*",
"### Dataset Summary\n\nDataset for compositional multimodal mathematical reasoning based on CLEVR.",
"#### Loading the data, preprocessing text with CLIP",
"### Supported Tasks and Leaderboards\n\nLeaderboard will be announced at a later date.",
"### Languages\n\nThe dataset is currently only available in English. To extend the dataset to other languages, the CLEVR templates must be rewritten in the target language.",
"## Dataset Structure",
"### Data Instances\n\n* 'general' containing the default version with multihop questions in train and test\n* 'multihop' containing multihop questions only in test data to test generalisation of reasoning",
"### Data Fields",
"### Data Splits\n\ntrain/val/test",
"## Dataset Creation\n\nData is generated using code provided with the CLEVR-dataset, using blender and templates constructed by the dataset curators.",
"## Considerations for Using the Data",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators\n\nAdam Dahlgren Lindström - dali@URL",
"### Licensing Information\n\nLicensed under Creative Commons Attribution Share Alike 4.0 International (CC-by 4.0).",
"### Contributions\n\nThanks to @dali-does for adding this dataset."
] | [
"TAGS\n#task_categories-visual-question-answering #task_ids-visual-question-answering #annotations_creators-machine-generated #language_creators-machine-generated #multilinguality-monolingual #source_datasets-clevr #language-English #license-cc-by-4.0 #reasoning #neuro-symbolic #multimodal #arxiv-2208.05358 #region-us \n",
"# Dataset Card for CLEVR-Math",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n- Considerations for Using the Data\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:*URL\n- Paper:*URL\n- Leaderboard:\n- Point of Contact:*dali@URL*",
"### Dataset Summary\n\nDataset for compositional multimodal mathematical reasoning based on CLEVR.",
"#### Loading the data, preprocessing text with CLIP",
"### Supported Tasks and Leaderboards\n\nLeaderboard will be announced at a later date.",
"### Languages\n\nThe dataset is currently only available in English. To extend the dataset to other languages, the CLEVR templates must be rewritten in the target language.",
"## Dataset Structure",
"### Data Instances\n\n* 'general' containing the default version with multihop questions in train and test\n* 'multihop' containing multihop questions only in test data to test generalisation of reasoning",
"### Data Fields",
"### Data Splits\n\ntrain/val/test",
"## Dataset Creation\n\nData is generated using code provided with the CLEVR-dataset, using blender and templates constructed by the dataset curators.",
"## Considerations for Using the Data",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators\n\nAdam Dahlgren Lindström - dali@URL",
"### Licensing Information\n\nLicensed under Creative Commons Attribution Share Alike 4.0 International (CC-by 4.0).",
"### Contributions\n\nThanks to @dali-does for adding this dataset."
] | [
111,
10,
92,
33,
25,
13,
20,
40,
6,
44,
5,
10,
35,
8,
7,
5,
15,
23,
18
] | [
"passage: TAGS\n#task_categories-visual-question-answering #task_ids-visual-question-answering #annotations_creators-machine-generated #language_creators-machine-generated #multilinguality-monolingual #source_datasets-clevr #language-English #license-cc-by-4.0 #reasoning #neuro-symbolic #multimodal #arxiv-2208.05358 #region-us \n# Dataset Card for CLEVR-Math## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n- Considerations for Using the Data\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage:\n- Repository:*URL\n- Paper:*URL\n- Leaderboard:\n- Point of Contact:*dali@URL*### Dataset Summary\n\nDataset for compositional multimodal mathematical reasoning based on CLEVR.#### Loading the data, preprocessing text with CLIP### Supported Tasks and Leaderboards\n\nLeaderboard will be announced at a later date.### Languages\n\nThe dataset is currently only available in English. To extend the dataset to other languages, the CLEVR templates must be rewritten in the target language.## Dataset Structure### Data Instances\n\n* 'general' containing the default version with multihop questions in train and test\n* 'multihop' containing multihop questions only in test data to test generalisation of reasoning### Data Fields### Data Splits\n\ntrain/val/test## Dataset Creation\n\nData is generated using code provided with the CLEVR-dataset, using blender and templates constructed by the dataset curators.## Considerations for Using the Data### Other Known Limitations## Additional Information### Dataset Curators\n\nAdam Dahlgren Lindström - dali@URL### Licensing Information\n\nLicensed under Creative Commons Attribution Share Alike 4.0 International (CC-by 4.0)."
] |
2c628097f293a86bdba429379dbb91c0952415eb |
# Oxford-IIIT Pet Dataset
Images from [The Oxford-IIIT Pet Dataset](https://www.robots.ox.ac.uk/~vgg/data/pets/). Only images and labels have been pushed, segmentation annotations were ignored.
- **Homepage:** https://www.robots.ox.ac.uk/~vgg/data/pets/
License:
Same as the original dataset.
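
A minimal loading sketch; the column names `image` and `label` are assumptions based on typical image-classification layouts on the Hub, so verify with the printout:

```python
from datasets import load_dataset

dataset = load_dataset("pcuenq/oxford-pets")
print(dataset)  # inspect the actual splits and columns

example = dataset["train"][0]
example["image"].show()  # assumes a PIL image column named 'image'
print(example["label"])  # assumes an integer breed label
```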
| pcuenq/oxford-pets | [
"task_categories:image-classification",
"source_datasets:https://www.robots.ox.ac.uk/~vgg/data/pets/",
"license:cc-by-sa-4.0",
"pets",
"oxford",
"region:us"
] | 2022-08-06T14:59:02+00:00 | {"license": "cc-by-sa-4.0", "source_datasets": "https://www.robots.ox.ac.uk/~vgg/data/pets/", "task_categories": ["image-classification"], "pretty_name": "Oxford-IIIT Pet Dataset (no annotations)", "tags": ["pets", "oxford"], "license_details": "https://www.robots.ox.ac.uk/~vgg/data/pets/"} | 2022-08-06T15:01:34+00:00 | [] | [] | TAGS
#task_categories-image-classification #source_datasets-https-//www.robots.ox.ac.uk/~vgg/data/pets/ #license-cc-by-sa-4.0 #pets #oxford #region-us
|
# Oxford-IIIT Pet Dataset
Images from The Oxford-IIIT Pet Dataset. Only images and labels have been pushed, segmentation annotations were ignored.
- Homepage: URL
License:
Same as the original dataset.
| [
"# Oxford-IIIT Pet Dataset\n\nImages from The Oxford-IIIT Pet Dataset. Only images and labels have been pushed, segmentation annotations were ignored.\n\n- Homepage: URL\n\nLicense:\nSame as the original dataset."
] | [
"TAGS\n#task_categories-image-classification #source_datasets-https-//www.robots.ox.ac.uk/~vgg/data/pets/ #license-cc-by-sa-4.0 #pets #oxford #region-us \n",
"# Oxford-IIIT Pet Dataset\n\nImages from The Oxford-IIIT Pet Dataset. Only images and labels have been pushed, segmentation annotations were ignored.\n\n- Homepage: URL\n\nLicense:\nSame as the original dataset."
] | [
64,
51
] | [
"passage: TAGS\n#task_categories-image-classification #source_datasets-https-//www.robots.ox.ac.uk/~vgg/data/pets/ #license-cc-by-sa-4.0 #pets #oxford #region-us \n# Oxford-IIIT Pet Dataset\n\nImages from The Oxford-IIIT Pet Dataset. Only images and labels have been pushed, segmentation annotations were ignored.\n\n- Homepage: URL\n\nLicense:\nSame as the original dataset."
] |
d598048f46b2f7796dcf3f29f969dd53114d13af | EN-ME Special Chars is a dataset of roughly 58000 aligned sentence pairs in English and Middle English, collected from the works of Geoffrey Chaucer, John Lydgate, John Wycliffe, and the Gawain Poet.
It includes special characters such as þ.
There is mild standardization, but this dataset reflects the spelling inconsistencies characteristic of Middle English.
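A hedged loading sketch — the field layout is not documented here (translation datasets on the Hub often store a per-row `translation` dict, but verify with the printout):
```
from datasets import load_dataset

dataset = load_dataset("Qilex/EN-MEspecialChars")
print(dataset)              # check the actual splits and columns
print(dataset["train"][0])  # expect an English / Middle English pair (with chars like þ)
```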
| Qilex/EN-MEspecialChars | [
"task_categories:translation",
"multilinguality:translation",
"size_categories:10K<n<100K",
"language:en",
"language:me",
"license:afl-3.0",
"middle english",
"region:us"
] | 2022-08-06T20:12:52+00:00 | {"annotations_creators": [], "language_creators": [], "language": ["en", "me"], "license": ["afl-3.0"], "multilinguality": ["translation"], "size_categories": ["10K<n<100K"], "source_datasets": [], "task_categories": ["translation"], "task_ids": [], "pretty_name": "EN-MEspecialChars", "tags": ["middle english"]} | 2022-08-06T20:38:43+00:00 | [] | [
"en",
"me"
] | TAGS
#task_categories-translation #multilinguality-translation #size_categories-10K<n<100K #language-English #language-me #license-afl-3.0 #middle english #region-us
| EN-ME Special Chars is a dataset of roughly 58000 aligned sentence pairs in English and Middle English, collected from the works of Geoffrey Chaucer, John Lydgate, John Wycliffe, and the Gawain Poet.
It includes special characters such as þ.
There is mild standardization, but this dataset reflects the spelling inconsistencies characteristic of Middle English.
| [] | [
"TAGS\n#task_categories-translation #multilinguality-translation #size_categories-10K<n<100K #language-English #language-me #license-afl-3.0 #middle english #region-us \n"
] | [
54
] | [
"passage: TAGS\n#task_categories-translation #multilinguality-translation #size_categories-10K<n<100K #language-English #language-me #license-afl-3.0 #middle english #region-us \n"
] |
b839c6ac6fc3fbf9ed2c3926433196b35f72afb9 | A dataset of sentences about professions; half of the translations use feminine forms and half use masculine forms.
How to use it:
```
from datasets import load_dataset
remote_dataset = load_dataset("VanessaSchenkel/handmade-dataset", field="data")
remote_dataset
```
Output:
```
DatasetDict({
train: Dataset({
features: ['id', 'translation'],
num_rows: 388
})
})
```
Example:
```
remote_dataset["train"][5]
```
Output:
```
{'id': '5',
'translation': {'english': 'the postman finished her work .',
'portuguese': 'A carteira terminou seu trabalho .'}}
``` | VanessaSchenkel/handmade-dataset | [
"task_categories:translation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:translation",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"language:pt",
"license:afl-3.0",
"region:us"
] | 2022-08-06T21:02:15+00:00 | {"annotations_creators": ["found"], "language_creators": ["found"], "language": ["en", "pt"], "license": ["afl-3.0"], "multilinguality": ["translation"], "size_categories": ["n<1K"], "source_datasets": ["original"], "task_categories": ["translation"], "task_ids": [], "pretty_name": "VanessaSchenkel/handmade-dataset", "tags": []} | 2022-08-06T21:11:34+00:00 | [] | [
"en",
"pt"
] | TAGS
#task_categories-translation #annotations_creators-found #language_creators-found #multilinguality-translation #size_categories-n<1K #source_datasets-original #language-English #language-Portuguese #license-afl-3.0 #region-us
| A dataset of sentences about professions; half of the translations use feminine forms and half use masculine forms.
How to use it:
Output:
Example:
Output:
| [] | [
"TAGS\n#task_categories-translation #annotations_creators-found #language_creators-found #multilinguality-translation #size_categories-n<1K #source_datasets-original #language-English #language-Portuguese #license-afl-3.0 #region-us \n"
] | [
76
] | [
"passage: TAGS\n#task_categories-translation #annotations_creators-found #language_creators-found #multilinguality-translation #size_categories-n<1K #source_datasets-original #language-English #language-Portuguese #license-afl-3.0 #region-us \n"
] |
7dd7ea5bc04520e2d01b963a15830ebff6e5db4b |
How to use it:
```
from datasets import load_dataset
remote_dataset = load_dataset("VanessaSchenkel/opus_books_en_pt", field="data")
remote_dataset
```
Output:
```
DatasetDict({
train: Dataset({
features: ['id', 'translation'],
num_rows: 1404
})
})
```
Example:
```
remote_dataset["train"][5]
```
Output:
```
{'id': '5',
'translation': {'en': "There was nothing so very remarkable in that; nor did Alice think it so very much out of the way to hear the Rabbit say to itself, 'Oh dear!",
'pt': 'Não havia nada de tão extraordinário nisso; nem Alice achou assim tão fora do normal ouvir o Coelho dizer para si mesmo: —"Oh, céus!'}}
``` | VanessaSchenkel/opus_books_en_pt | [
"task_categories:translation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:translation",
"size_categories:1K<n<10K",
"source_datasets:extended|opus_books",
"language:en",
"language:pt",
"license:afl-3.0",
"region:us"
] | 2022-08-06T21:34:58+00:00 | {"annotations_creators": ["found"], "language_creators": ["found"], "language": ["en", "pt"], "license": ["afl-3.0"], "multilinguality": ["translation"], "size_categories": ["1K<n<10K"], "source_datasets": ["extended|opus_books"], "task_categories": ["translation"], "task_ids": [], "pretty_name": "VanessaSchenkel/opus_books_en_pt", "tags": []} | 2022-08-06T21:46:10+00:00 | [] | [
"en",
"pt"
] | TAGS
#task_categories-translation #annotations_creators-found #language_creators-found #multilinguality-translation #size_categories-1K<n<10K #source_datasets-extended|opus_books #language-English #language-Portuguese #license-afl-3.0 #region-us
|
How to use it:
Output:
Example:
Output:
| [] | [
"TAGS\n#task_categories-translation #annotations_creators-found #language_creators-found #multilinguality-translation #size_categories-1K<n<10K #source_datasets-extended|opus_books #language-English #language-Portuguese #license-afl-3.0 #region-us \n"
] | [
85
] | [
"passage: TAGS\n#task_categories-translation #annotations_creators-found #language_creators-found #multilinguality-translation #size_categories-1K<n<10K #source_datasets-extended|opus_books #language-English #language-Portuguese #license-afl-3.0 #region-us \n"
] |
d628ab354f86c439b1eb1db39b3dc6cde6497346 |
# Indonesian News Categorization
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Indonews: Multiclass News Categorization scraped from popular news portals in Indonesia.
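As a starting point for the classification task, the data can be pulled straight from the Hub; a minimal sketch (split and column names are assumptions; check the repo):

```python
from datasets import load_dataset

# Load the Indonesian news categorization dataset.
dataset = load_dataset("jakartaresearch/indonews")

# Inspect one article/label pair before fine-tuning a classifier.
print(dataset["train"][0])
```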
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@andreaschandra](https://github.com/andreaschandra) for adding this dataset. | jakartaresearch/indonews | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:id",
"license:cc-by-4.0",
"news",
"news-classifcation",
"indonesia",
"region:us"
] | 2022-08-07T03:03:02+00:00 | {"annotations_creators": ["found"], "language_creators": ["found"], "language": ["id"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification"], "pretty_name": "Indonews", "tags": ["news", "news-classifcation", "indonesia"]} | 2022-08-07T03:27:54+00:00 | [] | [
"id"
] | TAGS
#task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Indonesian #license-cc-by-4.0 #news #news-classifcation #indonesia #region-us
|
# Indonesian News Categorization
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
Indonews: Multiclass News Categorization scraped from popular news portals in Indonesia.
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @andreaschandra for adding this dataset. | [
"# Indonesian News Categorization",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary\n\nIndonews: Multiclass News Categorization scrapped popular news portals in Indonesia.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @andreaschandra for adding this dataset."
] | [
"TAGS\n#task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Indonesian #license-cc-by-4.0 #news #news-classifcation #indonesia #region-us \n",
"# Indonesian News Categorization",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary\n\nIndonews: Multiclass News Categorization scrapped popular news portals in Indonesia.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @andreaschandra for adding this dataset."
] | [
101,
8,
125,
24,
25,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
17
] | [
"passage: TAGS\n#task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Indonesian #license-cc-by-4.0 #news #news-classifcation #indonesia #region-us \n# Indonesian News Categorization## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:### Dataset Summary\n\nIndonews: Multiclass News Categorization scrapped popular news portals in Indonesia.### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions\n\nThanks to @andreaschandra for adding this dataset."
] |
c73fd7730502cf3694ce5072b899b6ee6ac2bebf |
# Dataset Card for Poem Tweets
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The data come from Twitter. The purpose of this dataset is to build a text generation model for short texts and to make sure the generated texts are coherent and rhythmic.
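A minimal sketch for pulling the tweets as language-modeling training text (split and column names are assumptions; check the repo):

```python
from datasets import load_dataset

# Load the Indonesian poem tweets for language-model fine-tuning.
dataset = load_dataset("jakartaresearch/poem-tweets", split="train")

# Each row is one short poetic tweet; inspect the first record.
print(dataset[0])
```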
### Supported Tasks and Leaderboards
- Text Generation
- Language Model
### Languages
Indonesian
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@andreaschandra](https://github.com/andreaschandra) for adding this dataset. | jakartaresearch/poem-tweets | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:id",
"license:cc-by-4.0",
"poem",
"tweets",
"twitter",
"indonesian",
"region:us"
] | 2022-08-07T06:01:00+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["id"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "poem_tweets", "tags": ["poem", "tweets", "twitter", "indonesian"]} | 2022-08-07T07:54:18+00:00 | [] | [
"id"
] | TAGS
#task_categories-text-generation #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Indonesian #license-cc-by-4.0 #poem #tweets #twitter #indonesian #region-us
|
# Dataset Card for Poem Tweets
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
The data come from Twitter. The purpose of this dataset is to build a text generation model for short texts and to make sure the generated texts are coherent and rhythmic.
### Supported Tasks and Leaderboards
- Text Generation
- Language Model
### Languages
Indonesian
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @andreaschandra for adding this dataset. | [
"# Dataset Card for Poem Tweets",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary\n\nThe data are from Twitter. The purpose of this data is to create text generation model for short text and make sure they are all coherence and rhythmic",
"### Supported Tasks and Leaderboards\n\n- Text Generation\n- Language Model",
"### Languages\n\nIndonesian",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @andreaschandra for adding this dataset."
] | [
"TAGS\n#task_categories-text-generation #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Indonesian #license-cc-by-4.0 #poem #tweets #twitter #indonesian #region-us \n",
"# Dataset Card for Poem Tweets",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary\n\nThe data are from Twitter. The purpose of this data is to create text generation model for short text and make sure they are all coherence and rhythmic",
"### Supported Tasks and Leaderboards\n\n- Text Generation\n- Language Model",
"### Languages\n\nIndonesian",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @andreaschandra for adding this dataset."
] | [
102,
9,
125,
24,
39,
16,
6,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
17
] | [
"passage: TAGS\n#task_categories-text-generation #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Indonesian #license-cc-by-4.0 #poem #tweets #twitter #indonesian #region-us \n# Dataset Card for Poem Tweets## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:### Dataset Summary\n\nThe data are from Twitter. The purpose of this data is to create text generation model for short text and make sure they are all coherence and rhythmic### Supported Tasks and Leaderboards\n\n- Text Generation\n- Language Model### Languages\n\nIndonesian## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions\n\nThanks to @andreaschandra for adding this dataset."
] |
39fada6cd52f101ea2731456009bc6469b4d5a78 |
### Dataset Summary
KoPI-CC (Korpus Perayapan Indonesia)-CC is an Indonesian-only extract from Common Crawl snapshots built with [ungoliant](https://github.com/oscar-corpus/ungoliant); each snapshot is also filtered with deduplication techniques such as exact-hash (MD5) dedup and MinHash LSH near-dedup.
### Preprocessing
Each folder name inside the snapshots folder denotes the preprocessing technique that has been applied.
- **Raw**
- processed directly from the CC snapshot using ungoliant without any additional filter; you can read about it in their paper (citation below)
- use the same "raw cc snapshot" for `2021_10` and `2021_49`, which can be found in the OSCAR dataset ([2109](https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/tree/main/packaged_nondedup/id) and [2201](https://huggingface.co/datasets/oscar-corpus/OSCAR-2201/tree/main/compressed/id_meta))
- **Dedup**
- use data from raw folder
- apply cleaning techniques to every text in the documents, such as
- fix html
- remove noisy unicode
- fix news tag
- remove control char
- filter by removing short texts (fewer than 20 words)
- filter by character ratios occurring inside the text, such as
- min_alphabet_ratio (0.75)
- max_upper_ratio (0.10)
- max_number_ratio(0.05)
- filter by exact dedup technique
- hash all text with md5 hashlib
- remove non-unique hash
- full code about dedup step adapted from [here](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned/tree/main)
- **Neardup**
- use data from dedup folder
- create index clusters using near-dedup [Minhash and LSH](http://ekzhu.com/datasketch/lsh.html) with the following config (see the sketch after this list):
- use 128 permutations
- 6 n-grams size
- use word tokenization (split sentence by space)
- use 0.8 as similarity score
- filter by removing all indexes found in the duplicate clusters
- full code about neardup step adapted from [here](https://github.com/ChenghaoMou/text-dedup)
- **Neardup_clean**
- use data from neardup folder
- Removing documents containing words from a selection of the [Indonesian Bad Words](https://github.com/acul3/c4_id_processed/blob/67e10c086d43152788549ef05b7f09060e769993/clean/badwords_ennl.py#L64).
- Removing sentences containing:
- Less than 3 words.
- A word longer than 1000 characters.
- An end symbol not matching end-of-sentence punctuation.
- Strings associated with JavaScript code (e.g. `{`), lorem ipsum, or policy information in Indonesian
- Removing documents (after sentence filtering):
- Containing less than 5 sentences.
- Containing less than 500 or more than 50'000 characters.
- full code about neardup_clean step adapted from [here](https://gitlab.com/yhavinga/c4nlpreproc)
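A minimal sketch of the two dedup passes described above: an MD5 exact-hash pass followed by a MinHash LSH pass with the listed settings (128 permutations, 6-word n-grams, whitespace tokenization, 0.8 similarity). It uses the `datasketch` library and toy strings; it illustrates the technique and is not the project's actual pipeline code linked above:

```python
import hashlib
from datasketch import MinHash, MinHashLSH

# Toy corpus: an exact duplicate plus a near duplicate (one word changed in 100).
base = " ".join(f"kata{i}" for i in range(100))
docs = [base, base, base.replace("kata50", "lain50")]

def word_ngrams(text, n=6):
    tokens = text.split()  # word tokenization: split the sentence by spaces
    return [" ".join(tokens[i:i + n]) for i in range(max(1, len(tokens) - n + 1))]

# Pass 1: exact dedup -- hash every text with md5 and drop non-unique hashes.
seen, unique_docs = set(), []
for text in docs:
    digest = hashlib.md5(text.encode("utf-8")).hexdigest()
    if digest not in seen:
        seen.add(digest)
        unique_docs.append(text)

# Pass 2: near dedup -- MinHash LSH, 128 permutations, 0.8 similarity threshold.
lsh = MinHashLSH(threshold=0.8, num_perm=128)
kept = []
for i, text in enumerate(unique_docs):
    m = MinHash(num_perm=128)
    for gram in word_ngrams(text):
        m.update(gram.encode("utf-8"))
    if not lsh.query(m):  # nothing similar indexed yet -> keep this document
        lsh.insert(str(i), m)
        kept.append(text)

# The near copy has ~0.88 Jaccard on 6-grams, so it is (probabilistically) dropped.
print(len(docs), "->", len(unique_docs), "->", len(kept))
```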
## Dataset Structure
### Data Instances
An example from the dataset:
```
{'text': 'Panitia Kerja (Panja) pembahasan RUU Cipta Kerja (Ciptaker) DPR RI memastikan naskah UU Ciptaker sudah final, tapi masih dalam penyisiran. Penyisiran dilakukan agar isi UU Ciptaker sesuai dengan kesepakatan dalam pembahasan dan tidak ada salah pengetikan (typo).\n"Kan memang sudah diumumkan, naskah final itu sudah. Cuma kita sekarang … DPR itu kan punya waktu 7 hari sebelum naskah resminya kita kirim ke pemerintah. Nah, sekarang itu kita sisir, jangan sampai ada yang salah pengetikan, tapi tidak mengubah substansi," kata Ketua Panja RUU Ciptaker Supratman Andi Agtas saat berbincang dengan detikcom, Jumat (9/10/2020) pukul 10.56 WIB.\nSupratman mengungkapkan Panja RUU Ciptaker menggelar rapat hari ini untuk melakukan penyisiran terhadap naskah UU Ciptaker. Panja, sebut dia, bekerja sama dengan pemerintah dan ahli bahasa untuk melakukan penyisiran naskah.\n"Sebentar, siang saya undang seluruh poksi-poksi (kelompok fraksi) Baleg (Badan Legislasi DPR), anggota Panja itu datang ke Baleg untuk melihat satu per satu, jangan sampai …. Karena kan sekarang ini tim dapur pemerintah dan DPR lagi bekerja bersama dengan ahli bahasa melihat jangan sampai ada yang typo, redundant," terangnya.\nSupratman membenarkan bahwa naskah UU Ciptaker yang final itu sudah beredar. Ketua Baleg DPR itu memastikan penyisiran yang dilakukan tidak mengubah substansi setiap pasal yang telah melalui proses pembahasan.\n"Itu yang sudah dibagikan. Tapi kan itu substansinya yang tidak mungkin akan berubah. Nah, kita pastikan nih dari sisi drafting-nya yang jadi kita pastikan," tutur Supratman.\nLebih lanjut Supratman menjelaskan DPR memiliki waktu 7 hari untuk melakukan penyisiran. Anggota DPR dari Fraksi Gerindra itu memastikan paling lambat Selasa (13/10) pekan depan, naskah UU Ciptaker sudah bisa diakses oleh masyarakat melalui situs DPR.\n"Kita itu, DPR, punya waktu sampai 7 hari kerja. Jadi harusnya hari Selasa sudah final semua, paling lambat. Tapi saya usahakan hari ini bisa final. Kalau sudah final, semua itu langsung bisa diakses di web DPR," terang Supratman.\nDiberitakan sebelumnya, Wakil Ketua Baleg DPR Achmad Baidowi mengakui naskah UU Ciptaker yang telah disahkan di paripurna DPR masih dalam proses pengecekan untuk menghindari kesalahan pengetikan. Anggota Komisi VI DPR itu menyinggung soal salah ketik dalam revisi UU KPK yang disahkan pada 2019.\n"Mengoreksi yang typo itu boleh, asalkan tidak mengubah substansi. Jangan sampai seperti tahun lalu, ada UU salah ketik soal umur \'50 (empat puluh)\', sehingga pemerintah harus mengonfirmasi lagi ke DPR," ucap Baidowi, Kamis (8/10).',
'url': 'https://news.detik.com/berita/d-5206925/baleg-dpr-naskah-final-uu-ciptaker-sedang-diperbaiki-tanpa-ubah-substansi?tag_from=wp_cb_mostPopular_list&_ga=2.71339034.848625040.1602222726-629985507.1602222726',
'timestamp': '2021-10-22T04:09:47Z',
'meta': '{"warc_headers": {"content-length": "2747", "content-type": "text/plain", "warc-date": "2021-10-22T04:09:47Z", "warc-record-id": "<urn:uuid:a5b2cc09-bd2b-4d0e-9e5b-2fcc5fce47cb>", "warc-identified-content-language": "ind,eng", "warc-target-uri": "https://news.detik.com/berita/d-5206925/baleg-dpr-naskah-final-uu-ciptaker-sedang-diperbaiki-tanpa-ubah-substansi?tag_from=wp_cb_mostPopular_list&_ga=2.71339034.848625040.1602222726-629985507.1602222726", "warc-block-digest": "sha1:65AWBDBLS74AGDCGDBNDHBHADOKSXCKV", "warc-type": "conversion", "warc-refers-to": "<urn:uuid:b7ceadba-7120-4e38-927c-a50db21f0d4f>"}, "identification": {"label": "id", "prob": 0.6240405}, "annotations": null, "line_identifications": [null, {"label": "id", "prob": 0.9043896}, null, null, {"label": "id", "prob": 0.87111086}, {"label": "id", "prob": 0.9095224}, {"label": "id", "prob": 0.8579232}, {"label": "id", "prob": 0.81366056}, {"label": "id", "prob": 0.9286813}, {"label": "id", "prob": 0.8435194}, {"label": "id", "prob": 0.8387821}, null]}'}
```
### Data Fields
The data contains the following fields:
- `url`: url of the source as a string
- `text`: text content as a string
- `timestamp`: timestamp of extraction as a string
- `meta`: JSON representation of the original metadata from the ungoliant tools; the schema can be found [here](https://oscar-corpus.com/post/oscar-v22-01/) (warc_headers); see the sketch below
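Because `meta` is stored as a JSON string, reading it takes one extra decoding step; a minimal streaming sketch (the `2021_10` config name refers to one of the snapshots described above, but treat the exact config spelling as an assumption):

```python
import json
from datasets import load_dataset

# Stream one snapshot so the full corpus is not downloaded up front.
dataset = load_dataset("acul3/KoPI-CC", "2021_10", split="train", streaming=True)

row = next(iter(dataset))
meta = json.loads(row["meta"])  # decode the JSON string into a dict
print(row["url"], meta["identification"]["label"])
```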
## Additional Information
### Dataset Curators
For inquiries or requests regarding the KoPI-CC contained in this repository, please contact me at [[email protected]](mailto:[email protected])
### Licensing Information
These data are released under the following licensing scheme:
I do not own any of the text from which these data have been extracted.
I license the actual packaging of these data under the Creative Commons CC0 license ("no rights reserved") http://creativecommons.org/publicdomain/zero/1.0/
Should you consider that the data contain material that is owned by you and should therefore not be reproduced here, please:
* Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
* Clearly identify the copyrighted work claimed to be infringed.
* Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.
I will comply with legitimate requests by removing the affected sources from the next release of the corpus.
### Citation Information
```
@ARTICLE{2022arXiv220106642A,
author = {{Abadji}, Julien and {Ortiz Suarez}, Pedro and {Romary}, Laurent and {Sagot}, Beno{\^\i}t},
title = "{Towards a Cleaner Document-Oriented Multilingual Crawled Corpus}",
journal = {arXiv e-prints},
keywords = {Computer Science - Computation and Language},
year = 2022,
month = jan,
eid = {arXiv:2201.06642},
pages = {arXiv:2201.06642},
archivePrefix = {arXiv},
eprint = {2201.06642},
primaryClass = {cs.CL},
adsurl = {https://ui.adsabs.harvard.edu/abs/2022arXiv220106642A},
adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}
@inproceedings{AbadjiOrtizSuarezRomaryetal.2021,
author = {Julien Abadji and Pedro Javier Ortiz Su{\'a}rez and Laurent Romary and Beno{\^i}t Sagot},
title = {Ungoliant: An optimized pipeline for the generation of a very large-scale multilingual web corpus},
series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-9) 2021. Limerick, 12 July 2021 (Online-Event)},
editor = {Harald L{\"u}ngen and Marc Kupietz and Piotr Bański and Adrien Barbaresi and Simon Clematide and Ines Pisetta},
publisher = {Leibniz-Institut f{\"u}r Deutsche Sprache},
address = {Mannheim},
doi = {10.14618/ids-pub-10468},
url = {https://nbn-resolving.org/urn:nbn:de:bsz:mh39-104688},
pages = {1 -- 9},
year = {2021},
abstract = {Since the introduction of large language models in Natural Language Processing, large raw corpora have played a crucial role in Computational Linguistics. However, most of these large raw corpora are either available only for English or not available to the general public due to copyright issues. Nevertheless, there are some examples of freely available multilingual corpora for training Deep Learning NLP models, such as the OSCAR and Paracrawl corpora. However, they have quality issues, especially for low-resource languages. Moreover, recreating or updating these corpora is very complex. In this work, we try to reproduce and improve the goclassy pipeline used to create the OSCAR corpus. We propose a new pipeline that is faster, modular, parameterizable, and well documented. We use it to create a corpus similar to OSCAR but larger and based on recent data. Also, unlike OSCAR, the metadata information is at the document level. We release our pipeline under an open source license and publish the corpus under a research-only license.},
language = {en}
}
``` | acul3/KoPI-CC | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:id",
"license:cc",
"arxiv:2201.06642",
"region:us"
] | 2022-08-07T12:04:52+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["id"], "license": "cc", "multilinguality": ["monolingual"], "source_datasets": ["original"], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "paperswithcode_id": "oscar"} | 2023-03-03T08:14:38+00:00 | [
"2201.06642"
] | [
"id"
] | TAGS
#task_categories-text-generation #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #source_datasets-original #language-Indonesian #license-cc #arxiv-2201.06642 #region-us
|
### Dataset Summary
KoPI-CC (Korpus Perayapan Indonesia)-CC is an Indonesian-only extract from Common Crawl snapshots built with ungoliant; each snapshot is also filtered with deduplication techniques such as exact-hash (MD5) dedup and MinHash LSH near-dedup.
### Preprocessing
Each folder name inside the snapshots folder denotes the preprocessing technique that has been applied.
- Raw
- processed directly from the CC snapshot using ungoliant without any additional filter; you can read about it in their paper (citation below)
- use the same "raw cc snapshot" for '2021_10' and '2021_49', which can be found in the OSCAR dataset (2109 and 2201)
- Dedup
- use data from raw folder
- apply cleaning techniques to every text in the documents, such as
- fix html
- remove noisy unicode
- fix news tag
- remove control char
- filter by removing short texts (fewer than 20 words)
- filter by character ratios occurring inside the text, such as
- min_alphabet_ratio (0.75)
- max_upper_ratio (0.10)
- max_number_ratio(0.05)
- filter by exact dedup technique
- hash all text with md5 hashlib
- remove non-unique hash
- full code about dedup step adapted from here
- Neardup
- use data from dedup folder
- create index clusters using near-dedup Minhash and LSH with the following config:
- use 128 permutations
- 6 n-grams size
- use word tokenization (split sentence by space)
- use 0.8 as similarity score
- filter by removing all indexes found in the duplicate clusters
- full code about neardup step adapted from here
- Neardup_clean
- use data from neardup folder
- Removing documents containing words from a selection of the Indonesian Bad Words.
- Removing sentences containing:
- Less than 3 words.
- A word longer than 1000 characters.
- An end symbol not matching end-of-sentence punctuation.
- Strings associated with JavaScript code (e.g. '{'), lorem ipsum, or policy information in Indonesian
- Removing documents (after sentence filtering):
- Containing less than 5 sentences.
- Containing less than 500 or more than 50'000 characters.
- full code about neardup_clean step adapted from here
## Dataset Structure
### Data Instances
An example from the dataset:
### Data Fields
The data contains the following fields:
- 'url': url of the source as a string
- 'text': text content as a string
- 'timestamp': timestamp of extraction as a string
- 'meta': JSON representation of the original metadata from the ungoliant tools; the schema can be found here (warc_headers)
## Additional Information
### Dataset Curators
For inquiries or requests regarding the KoPI-CC contained in this repository, please contact me at samsulrahmadani@URL
### Licensing Information
These data are released under the following licensing scheme:
I do not own any of the text from which these data have been extracted.
I license the actual packaging of these data under the Creative Commons CC0 license ("no rights reserved") URL
Should you consider that the data contain material that is owned by you and should therefore not be reproduced here, please:
* Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
* Clearly identify the copyrighted work claimed to be infringed.
* Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.
I will comply with legitimate requests by removing the affected sources from the next release of the corpus.
| [
"### Dataset Summary\n\nKoPI-CC (Korpus Perayapan Indonesia)-CC is Indonesian only extract from Common Crawl snapshots using ungoliant, each snapshot also filtered using some some deduplicate technique such as exact hash(md5) dedup technique and minhash LSH neardup",
"### Preprocessing\n\nEach folder name inside snapshots folder denoted preprocessing technique that has been applied .\n\n - Raw\n\t - this processed directly from cc snapshot using ungoliant without any addition filter ,you can read it in their paper (citation below)\n\t - use same \"raw cc snapshot\" for '2021_10' and '2021_49' which can be found in oscar dataset (2109 and 2201)\n - Dedup\n\t - use data from raw folder\n\t - apply cleaning techniques for every text in documents such as \n\t\t - fix html\n\t\t - remove noisy unicode\n\t\t - fix news tag\n\t\t - remove control char\n\t - filter by removing short text (20 words)\n\t - filter by character ratio occurred inside text such as\n\t\t - min_alphabet_ratio (0.75)\n\t\t - max_upper_ratio (0.10)\n\t\t - max_number_ratio(0.05)\n\t\n\t - filter by exact dedup technique\n\t\t - hash all text with md5 hashlib\n\t\t - remove non-unique hash\n\t - full code about dedup step adapted from here\n - Neardup\n\t - use data from dedup folder\n\t\n\t - create index cluster using neardup Minhash and LSH with following config :\n\t\t - use 128 permuation\n\t\t - 6 n-grams size\n\t\t - use word tokenization (split sentence by space)\n\t\t - use 0.8 as similarity score\n\t\n\t - fillter by removing all index from cluster\n\t - full code about neardup step adapted from here\n - Neardup_clean\n\t - use data from neardup folder\n\t - Removing documents containing words from a selection of the Indonesian Bad Words.\n\t\n \n\t- Removing sentences containing:\n\t \n\t - Less than 3 words.\n\t \n\t - A word longer than 1000 characters.\n\t \n\t - An end symbol not matching end-of-sentence punctuation.\n\t \n\t - Strings associated to javascript code (e.g. '{'), lorem ipsum, policy information in indonesia\n \n\t- Removing documents (after sentence filtering):\n\t \n\t - Containing less than 5 sentences.\n\t \n\t - Containing less than 500 or more than 50'000 characters.\n\t - full code about neardup_clean step adapted from here",
"## Dataset Structure",
"### Data Instances\n\nAn example from the dataset:",
"### Data Fields\nThe data contains the following fields:\n- 'url': url of the source as a string\n- 'text': text content as a string\n- 'timestamp': timestamp of extraction as a string\n- 'meta' : json representation of the original from ungoliant tools,can be found here (warc_heder)",
"## Additional Information",
"### Dataset Curators\nFor inquiries or requests regarding the KoPI-CC contained in this repository, please contact me at samsulrahmadani@URL",
"### Licensing Information\n These data are released under this licensing scheme\n I do not own any of the text from which these data has been extracted.\n \tthe license actual packaging of these data under the Creative Commons CC0 license (\"no rights reserved\") URL\n Should you consider that data contains material that is owned by you and should therefore not be reproduced here, please:\n * Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.\n * Clearly identify the copyrighted work claimed to be infringed.\n * Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.\n I will comply to legitimate requests by removing the affected sources from the next release of the corpus."
] | [
"TAGS\n#task_categories-text-generation #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #source_datasets-original #language-Indonesian #license-cc #arxiv-2201.06642 #region-us \n",
"### Dataset Summary\n\nKoPI-CC (Korpus Perayapan Indonesia)-CC is Indonesian only extract from Common Crawl snapshots using ungoliant, each snapshot also filtered using some some deduplicate technique such as exact hash(md5) dedup technique and minhash LSH neardup",
"### Preprocessing\n\nEach folder name inside snapshots folder denoted preprocessing technique that has been applied .\n\n - Raw\n\t - this processed directly from cc snapshot using ungoliant without any addition filter ,you can read it in their paper (citation below)\n\t - use same \"raw cc snapshot\" for '2021_10' and '2021_49' which can be found in oscar dataset (2109 and 2201)\n - Dedup\n\t - use data from raw folder\n\t - apply cleaning techniques for every text in documents such as \n\t\t - fix html\n\t\t - remove noisy unicode\n\t\t - fix news tag\n\t\t - remove control char\n\t - filter by removing short text (20 words)\n\t - filter by character ratio occurred inside text such as\n\t\t - min_alphabet_ratio (0.75)\n\t\t - max_upper_ratio (0.10)\n\t\t - max_number_ratio(0.05)\n\t\n\t - filter by exact dedup technique\n\t\t - hash all text with md5 hashlib\n\t\t - remove non-unique hash\n\t - full code about dedup step adapted from here\n - Neardup\n\t - use data from dedup folder\n\t\n\t - create index cluster using neardup Minhash and LSH with following config :\n\t\t - use 128 permuation\n\t\t - 6 n-grams size\n\t\t - use word tokenization (split sentence by space)\n\t\t - use 0.8 as similarity score\n\t\n\t - fillter by removing all index from cluster\n\t - full code about neardup step adapted from here\n - Neardup_clean\n\t - use data from neardup folder\n\t - Removing documents containing words from a selection of the Indonesian Bad Words.\n\t\n \n\t- Removing sentences containing:\n\t \n\t - Less than 3 words.\n\t \n\t - A word longer than 1000 characters.\n\t \n\t - An end symbol not matching end-of-sentence punctuation.\n\t \n\t - Strings associated to javascript code (e.g. '{'), lorem ipsum, policy information in indonesia\n \n\t- Removing documents (after sentence filtering):\n\t \n\t - Containing less than 5 sentences.\n\t \n\t - Containing less than 500 or more than 50'000 characters.\n\t - full code about neardup_clean step adapted from here",
"## Dataset Structure",
"### Data Instances\n\nAn example from the dataset:",
"### Data Fields\nThe data contains the following fields:\n- 'url': url of the source as a string\n- 'text': text content as a string\n- 'timestamp': timestamp of extraction as a string\n- 'meta' : json representation of the original from ungoliant tools,can be found here (warc_heder)",
"## Additional Information",
"### Dataset Curators\nFor inquiries or requests regarding the KoPI-CC contained in this repository, please contact me at samsulrahmadani@URL",
"### Licensing Information\n These data are released under this licensing scheme\n I do not own any of the text from which these data has been extracted.\n \tthe license actual packaging of these data under the Creative Commons CC0 license (\"no rights reserved\") URL\n Should you consider that data contains material that is owned by you and should therefore not be reproduced here, please:\n * Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.\n * Clearly identify the copyrighted work claimed to be infringed.\n * Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.\n I will comply to legitimate requests by removing the affected sources from the next release of the corpus."
] | [
83,
71,
462,
6,
13,
81,
5,
38,
175
] | [
"passage: TAGS\n#task_categories-text-generation #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #source_datasets-original #language-Indonesian #license-cc #arxiv-2201.06642 #region-us \n### Dataset Summary\n\nKoPI-CC (Korpus Perayapan Indonesia)-CC is Indonesian only extract from Common Crawl snapshots using ungoliant, each snapshot also filtered using some some deduplicate technique such as exact hash(md5) dedup technique and minhash LSH neardup"
] |
64c7044457dd130ae6db88a0a5c386a1c1a6249e |
### Dataset description
33,000 transcribed text lines from historical newspapers (before 1878) along with the cropped images of the original scans
Text-line-based OCR
19,000 text lines in Antiqua
14,000 text lines in Fraktur
Transcribed using double-keying (99.95% accuracy)
Public Domain, CC0 (See copyright notice)
Best for training an OCR engine
The newspapers used are:
- Le Gratis luxembourgeois (1857-1858)
- Luxemburger Volks-Freund (1869-1876)
- L'Arlequin (1848-1848)
- Courrier du Grand-Duché de Luxembourg (1844-1868)
- L'Avenir (1868-1871)
- Der Wächter an der Sauer (1849-1869)
- Luxemburger Zeitung (1844-1845)
- Luxemburger Zeitung = Journal de Luxembourg (1858-1859)
- Der Volksfreund (1848-1849)
- Cäcilia (1862-1871)
- Kirchlicher Anzeiger für die Diözese Luxemburg (1871-1878)
- L'Indépendance luxembourgeoise (1871-1878)
- Luxemburger Anzeiger (1856)
- L'Union (1860-1871)
- Diekircher Wochenblatt (1837-1848)
- Das Vaterland (1869-1870)
- D'Wäschfra (1868-1878)
- Luxemburger Bauernzeitung (1857)
- Luxemburger Wort (1848-1878)
### URL for this dataset
https://data.bnl.lu/data/historical-newspapers/
### Dataset format
Two JSONL files (antiqua.jsonl.gz and fraktur.jsonl.gz) with the following fields:
- `font` is either antiqua or fraktur
- `img` is the filename of the associated image for the text
- `text` is the hand-corrected double-keyed text transcribed from the image
Sample:
```json
{
"font": "fraktur",
"img": "fraktur-000011.png",
"text": "Vidal die Vollmacht für Paris an. Auch"
}
```
In addition there are two `.zip` files with the associated images
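Since the transcriptions ship as gzipped JSONL, the Python standard library is enough to read them; a minimal sketch:

```python
import gzip
import json

# Read the Fraktur ground-truth file; each record pairs an image with its text.
with gzip.open("fraktur.jsonl.gz", "rt", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        print(record["font"], record["img"], record["text"])
        break  # show only the first record
```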
### Dataset modality
Text and associated Images from Scans
### Dataset licence
Creative Commons Public Domain Dedication and Certification
### size of dataset
500MB-2GB
### Contact details for data custodian
[email protected]
| biglam/bnl_ground_truth_newspapers_before_1878 | [
"license:cc0-1.0",
"region:us"
] | 2022-08-07T12:13:39+00:00 | {"license": "cc0-1.0"} | 2022-08-07T12:16:10+00:00 | [] | [] | TAGS
#license-cc0-1.0 #region-us
|
### Dataset description
33,000 transcribed text lines from historical newspapers (before 1878) along with the cropped images of the original scans
Text-line-based OCR
19,000 text lines in Antiqua
14,000 text lines in Fraktur
Transcribed using double-keying (99.95% accuracy)
Public Domain, CC0 (See copyright notice)
Best for training an OCR engine
The newspapers used are:
- Le Gratis luxembourgeois (1857-1858)
- Luxemburger Volks-Freund (1869-1876)
- L'Arlequin (1848-1848)
- Courrier du Grand-Duché de Luxembourg (1844-1868)
- L'Avenir (1868-1871)
- Der Wächter an der Sauer (1849-1869)
- Luxemburger Zeitung (1844-1845)
- Luxemburger Zeitung = Journal de Luxembourg (1858-1859)
- Der Volksfreund (1848-1849)
- Cäcilia (1862-1871)
- Kirchlicher Anzeiger für die Diözese Luxemburg (1871-1878)
- L'Indépendance luxembourgeoise (1871-1878)
- Luxemburger Anzeiger (1856)
- L'Union (1860-1871)
- Diekircher Wochenblatt (1837-1848)
- Das Vaterland (1869-1870)
- D'Wäschfra (1868-1878)
- Luxemburger Bauernzeitung (1857)
- Luxemburger Wort (1848-1878)
### URL for this dataset
URL
### Dataset format
Two JSONL files (URL and URL) with the following fields:
- 'font' is either antiqua or fraktur
- 'img' is the filename of the associated image for the text
- 'text' is the hand-corrected double-keyed text transcribed from the image
Sample:
In addition there are two '.zip' files with the associated images
### Dataset modality
Text and associated Images from Scans
### Dataset licence
Creative Commons Public Domain Dedication and Certification
### size of dataset
500MB-2GB
### Contact details for data custodian
opendata@URL
| [
"### Dataset description\n\n33.000 transcribed text lines from historical newspapers (before 1878) along with the cropped images of the original scans\n\nText line based OCR\n19.000 text lines in Antiqua\n14.000 text lines in Fraktur\nTranscribed using double-keying (99.95% accuracy)\nPublic Domain, CC0 (See copyright notice)\nBest for training an OCR engine\n\nThe newspapers used are:\n- Le Gratis luxembourgeois (1857-1858)\n- Luxemburger Volks-Freund (1869-1876)\n- L'Arlequin (1848-1848)\n- Courrier du Grand-Duché de Luxembourg (1844-1868)\n- L'Avenir (1868-1871)\n- Der Wächter an der Sauer (1849-1869)\n- Luxemburger Zeitung (1844-1845)\n- Luxemburger Zeitung = Journal de Luxembourg (1858-1859)\n- Der Volksfreund (1848-1849)\n- Cäcilia (1862-1871)\n- Kirchlicher Anzeiger für die Diözese Luxemburg (1871-1878)\n- L'Indépendance luxembourgeoise (1871-1878)\n- Luxemburger Anzeiger (1856)\n- L'Union (1860-1871)\n- Diekircher Wochenblatt (1837-1848)\n- Das Vaterland (1869-1870)\n- D'Wäschfra (1868-1878)\n- Luxemburger Bauernzeitung (1857)\n- Luxemburger Wort (1848-1878)",
"### URL for this dataset\n\nURL",
"### Dataset format\n\nTwo JSONL files (URL and URL) with the follwing fields:\n- 'font' is either antiqua or fraktur\n- 'img' is the filename of the associated image for the text\n- 'text' is the handcorrected double-keyed text transcribed from the image\n\nSample:\n\n\nIn addition there are two '.zip' files with the associated images",
"### Dataset modality\n\nText and associated Images from Scans",
"### Dataset licence\n\nCreative Commons Public Domain Dedication and Certification",
"### size of dataset\n\n500MB-2GB",
"### Contact details for data custodian\n\nopendata@URL"
] | [
"TAGS\n#license-cc0-1.0 #region-us \n",
"### Dataset description\n\n33.000 transcribed text lines from historical newspapers (before 1878) along with the cropped images of the original scans\n\nText line based OCR\n19.000 text lines in Antiqua\n14.000 text lines in Fraktur\nTranscribed using double-keying (99.95% accuracy)\nPublic Domain, CC0 (See copyright notice)\nBest for training an OCR engine\n\nThe newspapers used are:\n- Le Gratis luxembourgeois (1857-1858)\n- Luxemburger Volks-Freund (1869-1876)\n- L'Arlequin (1848-1848)\n- Courrier du Grand-Duché de Luxembourg (1844-1868)\n- L'Avenir (1868-1871)\n- Der Wächter an der Sauer (1849-1869)\n- Luxemburger Zeitung (1844-1845)\n- Luxemburger Zeitung = Journal de Luxembourg (1858-1859)\n- Der Volksfreund (1848-1849)\n- Cäcilia (1862-1871)\n- Kirchlicher Anzeiger für die Diözese Luxemburg (1871-1878)\n- L'Indépendance luxembourgeoise (1871-1878)\n- Luxemburger Anzeiger (1856)\n- L'Union (1860-1871)\n- Diekircher Wochenblatt (1837-1848)\n- Das Vaterland (1869-1870)\n- D'Wäschfra (1868-1878)\n- Luxemburger Bauernzeitung (1857)\n- Luxemburger Wort (1848-1878)",
"### URL for this dataset\n\nURL",
"### Dataset format\n\nTwo JSONL files (URL and URL) with the follwing fields:\n- 'font' is either antiqua or fraktur\n- 'img' is the filename of the associated image for the text\n- 'text' is the handcorrected double-keyed text transcribed from the image\n\nSample:\n\n\nIn addition there are two '.zip' files with the associated images",
"### Dataset modality\n\nText and associated Images from Scans",
"### Dataset licence\n\nCreative Commons Public Domain Dedication and Certification",
"### size of dataset\n\n500MB-2GB",
"### Contact details for data custodian\n\nopendata@URL"
] | [
14,
292,
8,
87,
13,
15,
10,
12
] | [
"passage: TAGS\n#license-cc0-1.0 #region-us \n### Dataset description\n\n33.000 transcribed text lines from historical newspapers (before 1878) along with the cropped images of the original scans\n\nText line based OCR\n19.000 text lines in Antiqua\n14.000 text lines in Fraktur\nTranscribed using double-keying (99.95% accuracy)\nPublic Domain, CC0 (See copyright notice)\nBest for training an OCR engine\n\nThe newspapers used are:\n- Le Gratis luxembourgeois (1857-1858)\n- Luxemburger Volks-Freund (1869-1876)\n- L'Arlequin (1848-1848)\n- Courrier du Grand-Duché de Luxembourg (1844-1868)\n- L'Avenir (1868-1871)\n- Der Wächter an der Sauer (1849-1869)\n- Luxemburger Zeitung (1844-1845)\n- Luxemburger Zeitung = Journal de Luxembourg (1858-1859)\n- Der Volksfreund (1848-1849)\n- Cäcilia (1862-1871)\n- Kirchlicher Anzeiger für die Diözese Luxemburg (1871-1878)\n- L'Indépendance luxembourgeoise (1871-1878)\n- Luxemburger Anzeiger (1856)\n- L'Union (1860-1871)\n- Diekircher Wochenblatt (1837-1848)\n- Das Vaterland (1869-1870)\n- D'Wäschfra (1868-1878)\n- Luxemburger Bauernzeitung (1857)\n- Luxemburger Wort (1848-1878)### URL for this dataset\n\nURL### Dataset format\n\nTwo JSONL files (URL and URL) with the follwing fields:\n- 'font' is either antiqua or fraktur\n- 'img' is the filename of the associated image for the text\n- 'text' is the handcorrected double-keyed text transcribed from the image\n\nSample:\n\n\nIn addition there are two '.zip' files with the associated images### Dataset modality\n\nText and associated Images from Scans### Dataset licence\n\nCreative Commons Public Domain Dedication and Certification### size of dataset\n\n500MB-2GB### Contact details for data custodian\n\nopendata@URL"
] |
2f37090fe26d8da9b59f8403426fa17c69a9f157 | This dataset contains ATC communication.
It can be used to fine-tune an **ASR** model, specialised for Air Traffic Control Communications (ATC).
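A minimal loading sketch (the split name and column layout are assumptions here; inspect the loaded features first):

```python
# Sketch: load the dataset and inspect one example before ASR fine-tuning.
# The split name and column names are assumptions -- check the actual schema.
from datasets import load_dataset

atc = load_dataset("luigisaetta/atco2", split="train")
print(atc.features)   # e.g. an audio column plus a transcript column
print(atc[0])
```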
Its data have been taken from the [ATCO2 site](https://www.atco2.org/data) | luigisaetta/atco2 | [
"region:us"
] | 2022-08-07T12:27:14+00:00 | {} | 2022-08-29T06:36:28+00:00 | [] | [] | TAGS
#region-us
| This dataset contains ATC communication.
It can be used to fine-tune an ASR model, specialised for Air Traffic Control Communications (ATC).
Its data have been taken from the ATCO2 site | [] | [
"TAGS\n#region-us \n"
] | [
6
] | [
"passage: TAGS\n#region-us \n"
] |
704867178079f256151dc7d561bb241083f3c0de | # AutoTrain Dataset for project: provision_classification
## Dataset Description
This dataset has been automatically processed by AutoTrain for project provision_classification.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "Each Partner hereby represents and warrants to the Partnership and each other Partner that (a)\u00a0if such Partner is a corporation, it is duly organized, validly existing, and in good standing under the laws of the jurisdiction of its incorporation and is duly qualified and in good standing as a foreign corporation in the jurisdiction of its principal place of business (if not incorporated therein), (b) if such Partner is a trust, estate or other entity, it is duly formed, validly existing, and (if applicable) in good standing under the laws of the jurisdiction of its formation, and if required by law is duly qualified to do business and (if applicable) in good standing in the jurisdiction of its principal place of business (if not formed therein), (c) such Partner has full corporate, trust, or other applicable right, power and authority to enter into this Agreement and to perform its obligations hereunder and all necessary actions by the board of directors, trustees, beneficiaries, or other Persons necessary for the due authorization, execution, delivery, and performance of this Agreement by such Partner have been duly taken, and such authorization, execution, delivery, and performance do not conflict with any other agreement or arrangement to which such Partner is a party or by which it is bound, and (d)\u00a0such Partner is acquiring its interest in the Partnership for investment purposes and not with a view to distribution thereof.",
"target": 13
},
{
"text": "This Letter Agreement is binding upon and inures to the benefit of the parties and their respective heirs, executors, administrators, personal representatives, successors, and permitted assigns. This Letter Agreement is personal to you and the availability of you to perform services and the covenants provided by you hereunder have been a material consideration for the Company to enter into this Letter Agreement. Accordingly, you may not assign any of your rights or delegate any of your duties under this Letter Agreement, either voluntarily or by operation of law, without the prior written consent of the Company, which may be given or withheld by the Company in its sole and absolute discretion.",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(num_classes=19, names=['Assignment', 'Attorney Fees', 'Bankruptcy', 'Change of Control', 'Compliance with Laws', 'Confidentiality', 'Entire Agreement', 'General Definition', 'Governing Law', 'Indemnification', 'Injunctive Relief', 'Jurisdiction and Venue', 'Liens', 'No Warranties', 'Other', 'Permitted Disclosure', 'Survival', 'Term', 'Termination for Convenience'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 119023 |
| valid | 13225 |
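For illustration, the splits above could be loaded like this (a sketch; it assumes the dataset is accessible on the Hub under this repo id and keeps the `text`/`target` fields shown earlier):

```python
# Sketch: load both splits and decode the ClassLabel targets.
from datasets import load_dataset

ds = load_dataset("Truthful/autotrain-data-provision_classification")
train, valid = ds["train"], ds["valid"]

label_names = train.features["target"].names  # the 19 provision classes
sample = train[0]
print(sample["text"][:100], "->", label_names[sample["target"]])
```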
| Truthful/autotrain-data-provision_classification | [
"task_categories:text-classification",
"region:us"
] | 2022-08-08T04:26:34+00:00 | {"task_categories": ["text-classification"]} | 2022-08-08T04:29:45+00:00 | [] | [] | TAGS
#task_categories-text-classification #region-us
| AutoTrain Dataset for project: provision\_classification
========================================================
Dataset Description
-------------------
This dataset has been automatically processed by AutoTrain for project provision\_classification.
### Languages
The BCP-47 code for the dataset's language is unk.
Dataset Structure
-----------------
### Data Instances
A sample from this dataset looks as follows:
### Dataset Fields
The dataset has the following fields (also called "features"):
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| [
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] | [
"TAGS\n#task_categories-text-classification #region-us \n",
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] | [
17,
27,
17,
23,
27
] | [
"passage: TAGS\n#task_categories-text-classification #region-us \n### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nA sample from this dataset looks as follows:### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] |
4d0aa96069f24063697e4df63b95be78d3f7fb7d |
About Dataset
DBpedia (from "DB" for "database") is a project aiming to extract structured content from the information created in Wikipedia.
This is an extract of the data (after cleaning, kernel included) that provides taxonomic, hierarchical categories ("classes") for 342,782 Wikipedia articles. There are 3 levels, with 9, 70 and 219 classes respectively.
A version of this dataset is a popular baseline for NLP/text classification tasks. This version of the dataset is much tougher, especially if the L2/L3 levels are used as the targets.
This is an excellent benchmark for hierarchical multiclass/multilabel text classification.
Some example approaches are included as code snippets.
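For instance, a minimal TF-IDF baseline on the coarsest (L1) level could look like the sketch below; the file name and the `text`/`l1` column names are assumptions based on the Kaggle release of this data:

```python
# Illustrative baseline: TF-IDF + logistic regression over the 9 L1 classes.
# File and column names are assumptions -- adapt them to the shipped files.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("DBPEDIA_train.csv")
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["l1"], test_size=0.2, random_state=0, stratify=df["l1"]
)

vec = TfidfVectorizer(max_features=50_000, ngram_range=(1, 2))
clf = LogisticRegression(max_iter=1000)
clf.fit(vec.fit_transform(X_train), y_train)
print("L1 accuracy:", accuracy_score(y_test, clf.predict(vec.transform(X_test))))
```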
Content
DBPedia dataset with multiple levels of hierarchy/classes, as a multiclass dataset.
Original DBPedia ontology (triplets data): https://wiki.dbpedia.org/develop/datasets
Listing of the class tree/taxonomy: http://mappings.dbpedia.org/server/ontology/classes/
Acknowledgements
Thanks to the Wikimedia foundation for creating Wikipedia, DBPedia and associated open-data goodness!
Thanks to my colleagues at Sparkbeyond (https://www.sparkbeyond.com) for pointing me towards the taxonomical version of this dataset (as opposed to the classic 14 class version)
Inspiration
Try different NLP models.
See also https://www.kaggle.com/datasets/danofer/dbpedia-classes
Compare to the SOTA in Text Classification on DBpedia - https://paperswithcode.com/sota/text-classification-on-dbpedia | DeveloperOats/DBPedia_Classes | [
"task_categories:text-classification",
"task_ids:topic-classification",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"language:en",
"license:cc0-1.0",
"region:us"
] | 2022-08-08T08:15:05+00:00 | {"annotations_creators": [], "language_creators": [], "language": ["en"], "license": ["cc0-1.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": [], "task_categories": ["text-classification"], "task_ids": ["topic-classification"], "pretty_name": "DBpedia", "tags": []} | 2022-08-08T13:54:42+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-classification #task_ids-topic-classification #multilinguality-monolingual #size_categories-1M<n<10M #language-English #license-cc0-1.0 #region-us
|
About Dataset
DBpedia (from "DB" for "database") is a project aiming to extract structured content from the information created in Wikipedia.
This is an extract of the data (after cleaning, kernel included) that provides taxonomic, hierarchical categories ("classes") for 342,782 Wikipedia articles. There are 3 levels, with 9, 70 and 219 classes respectively.
A version of this dataset is a popular baseline for NLP/text classification tasks. This version of the dataset is much tougher, especially if the L2/L3 levels are used as the targets.
This is an excellent benchmark for hierarchical multiclass/multilabel text classification.
Some example approaches are included as code snippets.
Content
DBPedia dataset with multiple levels of hierarchy/classes, as a multiclass dataset.
Original DBPedia ontology (triplets data): URL
Listing of the class tree/taxonomy: URL
Acknowledgements
Thanks to the Wikimedia foundation for creating Wikipedia, DBPedia and associated open-data goodness!
Thanks to my colleagues at Sparkbeyond (URL) for pointing me towards the taxonomical version of this dataset (as opposed to the classic 14 class version)
Inspiration
Try different NLP models.
See also URL
Compare to the SOTA in Text Classification on DBpedia - URL | [] | [
"TAGS\n#task_categories-text-classification #task_ids-topic-classification #multilinguality-monolingual #size_categories-1M<n<10M #language-English #license-cc0-1.0 #region-us \n"
] | [
59
] | [
"passage: TAGS\n#task_categories-text-classification #task_ids-topic-classification #multilinguality-monolingual #size_categories-1M<n<10M #language-English #license-cc0-1.0 #region-us \n"
] |
bc91c8c8dbea6a44069e0a955b6ed8dd54fb7fe3 |
About Dataset
Context
This dataset contains news headlines published over a period of nineteen years.
Sourced from the reputable Australian news source ABC (Australian Broadcasting Corporation)
Agency Site: (http://www.abc.net.au)
Content
Format: CSV ; Single File
publish_date: Date of publishing for the article in yyyyMMdd format
headline_text: Text of the headline in ASCII, English, lowercase
Start Date: 2003-02-19 ; End Date: 2021-12-31
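A short sketch for working with the date format described above (the file name is an assumption; the column names match the description):

```python
# Sketch: read the single CSV and parse the yyyyMMdd publish dates.
import pandas as pd

df = pd.read_csv("abcnews-date-text.csv")  # file name assumed
df["publish_date"] = pd.to_datetime(df["publish_date"], format="%Y%m%d")

# e.g. headline volume per year
print(df.groupby(df["publish_date"].dt.year)["headline_text"].count())
```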
Inspiration
I look at this news dataset as a summarised historical record of noteworthy events around the globe from early 2003 to the end of 2021, with a more granular focus on Australia.
This includes the entire corpus of articles published by the abcnews website in the given date range.
With a volume of two hundred articles per day and a good focus on international news, we can be fairly certain that every event of significance has been captured here.
Digging into the keywords, one can see all the important episodes shaping the last decade and how they evolved over time.
Ex: Afghanistan war, financial crisis, multiple elections, ecological disasters, terrorism, famous people, criminal activity, et cetera.
Similar Work
Similar news datasets exploring other attributes, countries and topics can be seen on my profile.
Most kernels can be reused with minimal changes across these news datasets.
Prepared by Rohit Kulkarni
Taken from https://www.kaggle.com/datasets/therohk/million-headlines | DeveloperOats/Million_News_Headlines | [
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"language:en",
"license:cc0-1.0",
"region:us"
] | 2022-08-08T08:24:34+00:00 | {"annotations_creators": [], "language_creators": [], "language": ["en"], "license": ["cc0-1.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": [], "task_categories": [], "task_ids": [], "pretty_name": "million news headline", "tags": []} | 2022-08-08T13:56:01+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #size_categories-1M<n<10M #language-English #license-cc0-1.0 #region-us
|
About Dataset
Context
This dataset contains news headlines published over a period of nineteen years.
Sourced from the reputable Australian news source ABC (Australian Broadcasting Corporation)
Agency Site: (URL)
Content
Format: CSV ; Single File
publish_date: Date of publishing for the article in yyyyMMdd format
headline_text: Text of the headline in ASCII, English, lowercase
Start Date: 2003-02-19 ; End Date: 2021-12-31
Inspiration
I look at this news dataset as a summarised historical record of noteworthy events around the globe from early 2003 to the end of 2021, with a more granular focus on Australia.
This includes the entire corpus of articles published by the abcnews website in the given date range.
With a volume of two hundred articles per day and a good focus on international news, we can be fairly certain that every event of significance has been captured here.
Digging into the keywords, one can see all the important episodes shaping the last decade and how they evolved over time.
Ex: Afghanistan war, financial crisis, multiple elections, ecological disasters, terrorism, famous people, criminal activity, et cetera.
Similar Work
Similar news datasets exploring other attributes, countries and topics can be seen on my profile.
Most kernels can be reused with minimal changes across these news datasets.
Prepared by Rohit Kulkarni
Taken from URL | [] | [
"TAGS\n#multilinguality-monolingual #size_categories-1M<n<10M #language-English #license-cc0-1.0 #region-us \n"
] | [
38
] | [
"passage: TAGS\n#multilinguality-monolingual #size_categories-1M<n<10M #language-English #license-cc0-1.0 #region-us \n"
] |
3ab23dde045f9fe601d8b1f3dadb467de3f05663 |
# Dataset Card for Cerpen Corpus
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This is a small-sized corpus of Indonesian short stories gathered from the internet.
We keep the larger version for internal research. If you are interested, please join [our Discord server](https://discord.gg/6v28dq8dRE).
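A minimal loading sketch (the split name is an assumption; inspect the loaded features before tokenizing):

```python
# Sketch: load the corpus for language modeling and inspect its schema.
from datasets import load_dataset

ds = load_dataset("jakartaresearch/cerpen-corpus", split="train")  # split assumed
print(ds.features)  # check the actual text column name
print(ds[0])
```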
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@andreaschandra](https://github.com/andreaschandra) for adding this dataset. | jakartaresearch/cerpen-corpus | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:n<1K",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:id",
"license:cc-by-4.0",
"cerpen",
"short-story",
"region:us"
] | 2022-08-08T13:05:26+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["id"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["n<1K", "10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "Small Indonesian Short Story Corpus", "tags": ["cerpen", "short-story"]} | 2022-11-28T04:15:40+00:00 | [] | [
"id"
] | TAGS
#task_categories-text-generation #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-n<1K #size_categories-10K<n<100K #source_datasets-original #language-Indonesian #license-cc-by-4.0 #cerpen #short-story #region-us
|
# Dataset Card for Cerpen Corpus
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
This is a small-sized corpus of Indonesian short stories gathered from the internet.
We keep the larger version for internal research. If you are interested, please join our Discord server
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @andreaschandra for adding this dataset. | [
"# Dataset Card for Cerpen Corpus",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary\n\nThis is a small size for Indonesian short story gathered from the internet.\nWe keep the large size for internal research. if you are interested, please join to our discord server",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @andreaschandra for adding this dataset."
] | [
"TAGS\n#task_categories-text-generation #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-n<1K #size_categories-10K<n<100K #source_datasets-original #language-Indonesian #license-cc-by-4.0 #cerpen #short-story #region-us \n",
"# Dataset Card for Cerpen Corpus",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary\n\nThis is a small size for Indonesian short story gathered from the internet.\nWe keep the large size for internal research. if you are interested, please join to our discord server",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @andreaschandra for adding this dataset."
] | [
107,
8,
125,
24,
43,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
17
] | [
"passage: TAGS\n#task_categories-text-generation #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-n<1K #size_categories-10K<n<100K #source_datasets-original #language-Indonesian #license-cc-by-4.0 #cerpen #short-story #region-us \n# Dataset Card for Cerpen Corpus## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:### Dataset Summary\n\nThis is a small size for Indonesian short story gathered from the internet.\nWe keep the large size for internal research. if you are interested, please join to our discord server### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions\n\nThanks to @andreaschandra for adding this dataset."
] |
903fe28851d02a976db7a3a4bc12b6cfa2f5443c | Dataset used by the paper:
Wu, Ming-Ju, Jyh-Shing R. Jang, and Jui-Long Chen. “Wafer Map Failure Pattern Recognition and Similarity Ranking for Large-Scale Data Sets.” IEEE Transactions on Semiconductor Manufacturing 28, no. 1 (February 2015): 1–12. | lslattery/wafer-defect-detection | [
"region:us"
] | 2022-08-08T14:33:55+00:00 | {} | 2022-08-14T18:53:45+00:00 | [] | [] | TAGS
#region-us
| Dataset used by the paper:
Wu, Ming-Ju, Jyh-Shing R. Jang, and Jui-Long Chen. “Wafer Map Failure Pattern Recognition and Similarity Ranking for Large-Scale Data Sets.” IEEE Transactions on Semiconductor Manufacturing 28, no. 1 (February 2015): 1–12. | [] | [
"TAGS\n#region-us \n"
] | [
6
] | [
"passage: TAGS\n#region-us \n"
] |
aa09900373d90780ee70d27571775aff0e51569c | Customer churn prediction dataset of a fictional telecommunication company made by IBM Sample Datasets.
Context
Predict behavior to retain customers. You can analyze all relevant customer data and develop focused customer retention programs.
Content
Each row represents a customer; each column contains a customer attribute, as described in the column metadata. A minimal modeling sketch follows the list below.
The data set includes information about:
- Customers who left within the last month: the column is called Churn
- Services that each customer has signed up for: phone, multiple lines, internet, online security, online backup, device protection, tech support, and streaming TV and movies
- Customer account information: how long they’ve been a customer, contract, payment method, paperless billing, monthly charges, and total charges
- Demographic info about customers: gender, age range, and if they have partners and dependents
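Given these columns, a quick baseline might look like the following sketch (the file name and the "Yes"/"No" label encoding are assumptions; only the `Churn` column is named explicitly above):

```python
# Sketch: quick churn baseline. File name and "Yes"/"No" label encoding
# are assumptions; only the "Churn" column is named in the card.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("telco_churn.csv")
y = (df["Churn"] == "Yes").astype(int)
X = pd.get_dummies(df.drop(columns=["Churn"]), drop_first=True)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```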
Credits for the dataset and the card:
- [Kaggle](https://www.kaggle.com/datasets/blastchar/telco-customer-churn)
- [Latest version of the dataset by IBM Samples team](https://community.ibm.com/community/user/businessanalytics/blogs/steven-macko/2019/07/11/telco-customer-churn-1113)
| scikit-learn/churn-prediction | [
"license:cc-by-4.0",
"region:us"
] | 2022-08-08T16:42:17+00:00 | {"license": "cc-by-4.0"} | 2022-08-08T16:56:29+00:00 | [] | [] | TAGS
#license-cc-by-4.0 #region-us
| Customer churn prediction dataset of a fictional telecommunication company made by IBM Sample Datasets.
Context
Predict behavior to retain customers. You can analyze all relevant customer data and develop focused customer retention programs.
Content
Each row represents a customer; each column contains a customer attribute, as described in the column metadata.
The data set includes information about:
- Customers who left within the last month: the column is called Churn
- Services that each customer has signed up for: phone, multiple lines, internet, online security, online backup, device protection, tech support, and streaming TV and movies
- Customer account information: how long they’ve been a customer, contract, payment method, paperless billing, monthly charges, and total charges
- Demographic info about customers: gender, age range, and if they have partners and dependents
Credits for the dataset and the card:
- Kaggle
- Latest version of the dataset by IBM Samples team
| [] | [
"TAGS\n#license-cc-by-4.0 #region-us \n"
] | [
15
] | [
"passage: TAGS\n#license-cc-by-4.0 #region-us \n"
] |
5cde9ecee39de419b1a7c5838e86248a8a51ceef | annotations_creators:
- no-annotation
language:
- en
language_creators:
- expert-generated
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
paperswithcode_id: acronym-identification
pretty_name: Massive E-commerce Dataset for Retail and Insurance domain.
size_categories:
- n<1K
source_datasets:
- original
tags:
- chatbots
- e-commerce
- retail
- insurance
- consumer
- consumer goods
task_categories:
- question-answering
- text-retrieval
- text2text-generation
- other
- translation
- conversational
task_ids:
- extractive-qa
- closed-domain-qa
- utterance-retrieval
- document-retrieval
- open-book-qa
- closed-book-qa
train-eval-index:
- col_mapping:
labels: tags
tokens: tokens
config: default
splits:
eval_split: test
task: token-classification
task_id: entity_extraction | asaxena1990/Dummy_dataset | [
"license:cc-by-sa-4.0",
"region:us"
] | 2022-08-08T18:23:48+00:00 | {"license": "cc-by-sa-4.0"} | 2022-09-05T00:29:27+00:00 | [] | [] | TAGS
#license-cc-by-sa-4.0 #region-us
| annotations_creators:
- no-annotation
language:
- en
language_creators:
- expert-generated
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
paperswithcode_id: acronym-identification
pretty_name: Massive E-commerce Dataset for Retail and Insurance domain.
size_categories:
- n<1K
source_datasets:
- original
tags:
- chatbots
- e-commerce
- retail
- insurance
- consumer
- consumer goods
task_categories:
- question-answering
- text-retrieval
- text2text-generation
- other
- translation
- conversational
task_ids:
- extractive-qa
- closed-domain-qa
- utterance-retrieval
- document-retrieval
- open-book-qa
- closed-book-qa
train-eval-index:
- col_mapping:
labels: tags
tokens: tokens
config: default
splits:
eval_split: test
task: token-classification
task_id: entity_extraction | [] | [
"TAGS\n#license-cc-by-sa-4.0 #region-us \n"
] | [
17
] | [
"passage: TAGS\n#license-cc-by-sa-4.0 #region-us \n"
] |
ae7ffce08599695beb1a5fe3ba6736ec686abdd6 | 119893266 photos from flickr (https://www.flickr.com/creativecommons/by-nc-sa-2.0/)
---
all photos are under license id = 1 name=Attribution-NonCommercial-ShareAlike License url=https://creativecommons.org/licenses/by-nc-sa/2.0/ | Chr0my/public_flickr_photos_license_1 | [
"license:cc-by-nc-sa-3.0",
"region:us"
] | 2022-08-08T19:27:28+00:00 | {"license": "cc-by-nc-sa-3.0"} | 2022-08-08T19:39:40+00:00 | [] | [] | TAGS
#license-cc-by-nc-sa-3.0 #region-us
| 119893266 photos from flickr (URL
---
all photos are under license id = 1 name=Attribution-NonCommercial-ShareAlike License url=URL | [] | [
"TAGS\n#license-cc-by-nc-sa-3.0 #region-us \n"
] | [
19
] | [
"passage: TAGS\n#license-cc-by-nc-sa-3.0 #region-us \n"
] |
490b980249446f2f3bd2df3a8cf085d0f2de240a |
# Dataset Description
The `proof-pile` is a 13GB pre-training dataset of mathematical text that comprises 8.3 billion tokens (using the `gpt-neox` tokenizer). Models trained on this dataset are coming soon :) The dataset is composed of diverse sources of both informal and formal mathematics, namely
- ArXiv.math (10GB)
- Open-source math textbooks (50MB)
- Formal mathematics libraries (500MB)
- Lean mathlib and other Lean repositories
- Isabelle AFP
- Coq mathematical components and other Coq repositories
- HOL Light
- set.mm
- Mizar Mathematical Library
- Math Overflow and Math Stack Exchange (2.5GB)
- Wiki-style sources (50MB)
- ProofWiki
- Wikipedia math articles
- MATH dataset (6MB)
The construction of the dataset is reproducible using the code and instructions in the [proof-pile Github
repo](https://github.com/zhangir-azerbayev/proof-pile).
# Supported Tasks
This dataset is intended to be used for pre-training and fine-tuning language models. We envision models trained on the `proof-pile` will have many downstream applications, including informal quantitative reasoning, formal theorem proving, semantic search for formal mathematics, and autoformalization.
# Languages
All informal mathematics in the `proof-pile` is written in English and LaTeX (arXiv articles in other languages are filtered out using [languagedetect](https://github.com/shuyo/language-detection/blob/wiki/ProjectHome.md)). Formal theorem proving languages represented in this dataset are Lean 3, Isabelle, Coq, HOL Light, Metamath, and Mizar.
# Evaluation
The version of `set.mm` in this dataset has 10% of proofs replaced with the `?` character in order to preserve a validation and test set for Metamath provers pre-trained on the `proof-pile`. The precise split can be found here: [validation](https://github.com/zhangir-azerbayev/mm-extract/blob/main/valid_decls.json) and [test](https://github.com/zhangir-azerbayev/mm-extract/blob/main/test_decls.json).
The Lean mathlib commit used in this dataset is `6313863`. Theorems created in subsequent commits can be used for evaluating Lean theorem provers.
This dataset contains only the training set of the [MATH dataset](https://github.com/hendrycks/math). However, because this dataset contains ProofWiki, the Stacks Project, Trench's Analysis, and Stein's Number Theory, models trained on it cannot be evaluated on the [NaturalProofs dataset](https://github.com/wellecks/naturalproofs).
# Data Preprocessing
This section describes any significant filtering and transformations made to various subsets of the data.
**arXiv.math.**
The arXiv.math dataset is large, heterogeneous, and contains a great deal of noise. We used the following heuristics
when choosing which files from arXiv.math source folders to include in the dataset:
- Keep only files with a `.tex` extension.
- Only include files that use either a `utf-8/16/32` or `latin-1` text encoding.
- Discard files that do not contain a part, chapter, section, sub...section, paragraph, or subparagraph heading.
- Delete files that contain the keyword `gnuplot`. Gnuplot-latex is an old command line utility that generates blocks
of entirely unintelligible source.
- Include only articles in English, as determined by the [langdetect library](https://pypi.org/project/langdetect/).
- Exclude files shorter than 280 characters (characters counted after substring removal described below).
In addition, we apply the following transformations to arXiv.math texts:
- Delete everything outside of `\begin{document}` and `\end{document}`.
- Delete everything including or after `\Refs`, `\begin{thebibliography}`, or `\begin{bibdiv}`
- Delete comments.
- Any more than three consecutive newlines are replaced by three consecutive newlines.
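Two of these transformations are straightforward to express as regexes; an illustrative (not the authors' actual) implementation:

```python
import re

# Illustrative re-implementation of two transformations above; the real
# preprocessing code lives in the proof-pile GitHub repo.
def strip_comments(tex: str) -> str:
    # drop LaTeX comments not preceded by a backslash (naive: ignores verbatim)
    return re.sub(r"(?<!\\)%.*", "", tex)

def collapse_newlines(tex: str) -> str:
    # any run of more than three newlines becomes exactly three
    return re.sub(r"\n{4,}", "\n\n\n", tex)
```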
In [this notebook](https://github.com/zhangir-azerbayev/proof-pile/blob/main/analysis/arxiv_noisedetection.ipynb), we provide an analysis of the prevalence of noisy documents in the arXiv.math subset of the
proof-pile.
**Stack Exchange.**
We only include questions that have at least 5 upvotes and an answer. We format Stack Exchange posts as follows
```
QUESTION [{num_upvotes} upvotes]: {text of question}
REPLY [{num_upvotes} votes]: {text of reply}
REPLY [{num_upvotes} votes]: {text of reply}
.
.
.
```
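An illustrative formatter matching this template (the field names on the post dicts are assumptions):

```python
# Sketch of the formatting shown above; dict field names are assumptions.
def format_post(question: dict, replies: list) -> str:
    parts = [f"QUESTION [{question['upvotes']} upvotes]: {question['text']}"]
    for reply in replies:
        parts.append(f"REPLY [{reply['votes']} votes]: {reply['text']}")
    return "\n\n".join(parts)
```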
**set.mm.**
We converted `set.mm` into human-readable form by following the instructions in the [mm-extract repo](https://github.com/zhangir-azerbayev/mm-extract)
## Contributions
Authors: Zhangir Azerbayev, Edward Ayers, Bartosz Piotrowski.
We would like to thank Jeremy Avigad, Albert Jiang, and Wenda Li for their invaluable guidance, and the Hoskinson Center for Formal Mathematics for its support.
| hoskinson-center/proof-pile | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"language:en",
"license:apache-2.0",
"math",
"mathematics",
"formal-mathematics",
"region:us"
] | 2022-08-08T19:57:56+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "proof-pile", "tags": ["math", "mathematics", "formal-mathematics"]} | 2023-08-19T02:24:11+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-generation #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #language-English #license-apache-2.0 #math #mathematics #formal-mathematics #region-us
|
# Dataset Description
The 'proof-pile' is a 13GB pre-training dataset of mathematical text that comprises 8.3 billion tokens (using the 'gpt-neox' tokenizer). Models trained on this dataset are coming soon :) The dataset is composed of diverse sources of both informal and formal mathematics, namely
- URL (10GB)
- Open-source math textbooks (50MB)
- Formal mathematics libraries (500MB)
- Lean mathlib and other Lean repositories
- Isabelle AFP
- Coq mathematical components and other Coq repositories
- HOL Light
- URL
- Mizar Mathematical Library
- Math Overflow and Math Stack Exchange (2.5GB)
- Wiki-style sources (50MB)
- ProofWiki
- Wikipedia math articles
- MATH dataset (6MB)
The construction of the dataset is reproducible using the code and instructions in the proof-pile Github
repo.
# Supported Tasks
This dataset is intended to be used for pre-training and fine-tuning language models. We envision models trained on the 'proof-pile' will have many downstream applications, including informal quantitative reasoning, formal theorem proving, semantic search for formal mathematics, and autoformalization.
# Languages
All informal mathematics in the 'proof-pile' is written in English and LaTeX (arXiv articles in other languages are filtered out using languagedetect). Formal theorem proving languages represented in this dataset are Lean 3, Isabelle, Coq, HOL Light, Metamath, and Mizar.
# Evaluation
The version of 'URL' in this dataset has 10% of proofs replaced with the '?' character in order to preserve a validation and test set for Metamath provers pre-trained on the 'proof-pile'. The precise split can be found here: validation and test.
The Lean mathlib commit used in this dataset is '6313863'. Theorems created in subsequent commits can be used for evaluating Lean theorem provers.
This dataset contains only the training set of the MATH dataset. However, because this dataset contains ProofWiki, the Stacks Project, Trench's Analysis, and Stein's Number Theory, models trained on it cannot be evaluated on the NaturalProofs dataset.
# Data Preprocessing
This section describes any significant filtering and transformations made to various subsets of the data.
URL.
The URL dataset is large, heterogeneous, and contains a great deal of noise. We used the following heuristics
when choosing which files from URL source folders to include in the dataset:
- Keep only files with a '.tex' extension.
- Only include files that use either a 'utf-8/16/32' or 'latin-1' text encoding.
- Discard files that do not contain a part, chapter, section, sub...section, paragraph, or subparagraph heading.
- Delete files that contain the keyword 'gnuplot'. Gnuplot-latex is an old command line utility that generates blocks
of entirely unintelligible source.
- Include only articles in English, as determined by the langdetect library.
- Exclude files shorter than 280 characters (characters counted after substring removal described below).
In addition, we apply the following transformations to URL texts:
- Delete everything outside of '\begin{document}' and '\end{document}'.
- Delete everything including or after '\Refs', '\begin{thebibliography}', or '\begin{bibdiv}'
- Delete comments.
- Any more than three consecutive newlines are replaced by three consecutive newlines.
In this notebook, we provide an analysis of the prevalence of noisy documents in the URL subset of the
proof-pile.
Stack Exchange.
We only include questions that have at least 5 upvotes and an answer. We format Stack Exchange posts as follows
URL.
We converted 'URL' into human-readable form by following the instructions in the mm-extract repo
## Contributions
Authors: Zhangir Azerbayev, Edward Ayers, Bartosz Piotrowski.
We would like to thank Jeremy Avigad, Albert Jiang, and Wenda Li for their invaluable guidance, and the Hoskinson Center for Formal Mathematics for its support.
| [
"# Dataset Description\nThe 'proof-pile' is a 13GB pre-training dataset of mathematical text that comprises 8.3 billion tokens (using the 'gpt-neox' tokenizer). Models trained on this dataset are coming soon :) The dataset is composed of diverse sources of both informal and formal mathematics, namely\n- URL (10GB)\n- Open-source math textbooks (50MB)\n- Formal mathematics libraries (500MB)\n - Lean mathlib and other Lean repositories \n - Isabelle AFP\n - Coq mathematical components and other Coq repositories \n - HOL Light\n - URL\n - Mizar Mathematical Library\n- Math Overflow and Math Stack Exchange (2.5GB)\n- Wiki-style sources (50MB)\n - ProofWiki\n - Wikipedia math articles\n- MATH dataset (6MB)\n\nThe construction of the dataset is reproducible using the code and instructions in the proof-pile Github\nrepo.",
"# Supported Tasks\nThis dataset is intended to be used for pre-training and fine-tuning language models. We envision models trained on the 'proof-pile' will have many downstream applications, including informal quantitative reasoning, formal theorem proving, semantic search for formal mathematics, and autoformalization.",
"# Languages\nAll informal mathematics in the 'proof-pile' is written in English and LaTeX (arXiv articles in other languages are filtered out using languagedetect). Formal theorem proving languages represented in this dataset are Lean 3, Isabelle, Coq, HOL Light, Metamath, and Mizar.",
"# Evaluation\nThe version of 'URL' in this dataset has 10% of proofs replaced with the '?' character in order to preserve a validation and test set for Metamath provers pre-trained on the 'proof-pile'. The precise split can be found here: validation and test. \nThe Lean mathlib commit used in this dataset is '6313863'. Theorems created in subsequent commits can be used for evaluating Lean theorem provers. \n\nThis dataset contains only the training set of the MATH dataset. However, because this dataset contains ProofWiki, the Stacks Project, Trench's Analysis, and Stein's Number Theory, models trained on it cannot be evaluated on the NaturalProofs dataset.",
"# Data Preprocessing\nThis section describes any significant filtering and transformations made to various subsets of the data. \n\nURL.\nThe URL dataset is large, heterogeneous, and contains a great deal of noise. We used the following heuristics\nwhen choosing which files from URL source folders to include in the dataset:\n- Keep only files with a '.tex' extension.\n- Only include files that use either a 'utf-8/16/32' or 'latin-1' text encoding. \n- Discard files that do not contain a part, chapter, section, sub...section, paragraph, or subparagraph heading. \n- Delete files that contain the keyword 'gnuplot'. Gnuplot-latex is an old command line utility that generates blocks\n of entirely unintelligible source. \n- Include only articles in English, as determined by the langdetect library. \\n\",\n \"\\n\",\n- Exclude files shorter than 280 characters (characters counted after substring removal described below).\n\nIn addition, we apply the following transformations to URL texts: \n\n- Delete everything outside of '\\begin{document}' and '\\end{document}'. \n- Delete everything including or after '\\Refs', '\\begin{thebibliography}', or '\\begin{bibdiv}'\n- Delete comments. \n- Any more than three consecutive newlines are replaced by three consecutive newlines. \nIn this notebook, we provide an analysis of the prevalence of noisy documents in the URL subset of the\nproof-pile. \n\nStack Exchange.\nWe only include questions that have at least 5 upvotes and an answer. We format Stack Exchange posts as follows\n\n\nURL.\nWe converted 'URL' into human-readable form by following the instructions in the mm-extract repo",
"## Contributions\nAuthors: Zhangir Azerbayev, Edward Ayers, Bartosz Piotrowski. \n\nWe would like to thank Jeremy Avigad, Albert Jiang, and Wenda Li for their invaluable guidance, and the Hoskinson Center for Formal Mathematics for its support."
] | [
"TAGS\n#task_categories-text-generation #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #language-English #license-apache-2.0 #math #mathematics #formal-mathematics #region-us \n",
"# Dataset Description\nThe 'proof-pile' is a 13GB pre-training dataset of mathematical text that comprises 8.3 billion tokens (using the 'gpt-neox' tokenizer). Models trained on this dataset are coming soon :) The dataset is composed of diverse sources of both informal and formal mathematics, namely\n- URL (10GB)\n- Open-source math textbooks (50MB)\n- Formal mathematics libraries (500MB)\n - Lean mathlib and other Lean repositories \n - Isabelle AFP\n - Coq mathematical components and other Coq repositories \n - HOL Light\n - URL\n - Mizar Mathematical Library\n- Math Overflow and Math Stack Exchange (2.5GB)\n- Wiki-style sources (50MB)\n - ProofWiki\n - Wikipedia math articles\n- MATH dataset (6MB)\n\nThe construction of the dataset is reproducible using the code and instructions in the proof-pile Github\nrepo.",
"# Supported Tasks\nThis dataset is intended to be used for pre-training and fine-tuning language models. We envision models trained on the 'proof-pile' will have many downstream applications, including informal quantitative reasoning, formal theorem proving, semantic search for formal mathematics, and autoformalization.",
"# Languages\nAll informal mathematics in the 'proof-pile' is written in English and LaTeX (arXiv articles in other languages are filtered out using languagedetect). Formal theorem proving languages represented in this dataset are Lean 3, Isabelle, Coq, HOL Light, Metamath, and Mizar.",
"# Evaluation\nThe version of 'URL' in this dataset has 10% of proofs replaced with the '?' character in order to preserve a validation and test set for Metamath provers pre-trained on the 'proof-pile'. The precise split can be found here: validation and test. \nThe Lean mathlib commit used in this dataset is '6313863'. Theorems created in subsequent commits can be used for evaluating Lean theorem provers. \n\nThis dataset contains only the training set of the MATH dataset. However, because this dataset contains ProofWiki, the Stacks Project, Trench's Analysis, and Stein's Number Theory, models trained on it cannot be evaluated on the NaturalProofs dataset.",
"# Data Preprocessing\nThis section describes any significant filtering and transformations made to various subsets of the data. \n\nURL.\nThe URL dataset is large, heterogeneous, and contains a great deal of noise. We used the following heuristics\nwhen choosing which files from URL source folders to include in the dataset:\n- Keep only files with a '.tex' extension.\n- Only include files that use either a 'utf-8/16/32' or 'latin-1' text encoding. \n- Discard files that do not contain a part, chapter, section, sub...section, paragraph, or subparagraph heading. \n- Delete files that contain the keyword 'gnuplot'. Gnuplot-latex is an old command line utility that generates blocks\n of entirely unintelligible source. \n- Include only articles in English, as determined by the langdetect library. \\n\",\n \"\\n\",\n- Exclude files shorter than 280 characters (characters counted after substring removal described below).\n\nIn addition, we apply the following transformations to URL texts: \n\n- Delete everything outside of '\\begin{document}' and '\\end{document}'. \n- Delete everything including or after '\\Refs', '\\begin{thebibliography}', or '\\begin{bibdiv}'\n- Delete comments. \n- Any more than three consecutive newlines are replaced by three consecutive newlines. \nIn this notebook, we provide an analysis of the prevalence of noisy documents in the URL subset of the\nproof-pile. \n\nStack Exchange.\nWe only include questions that have at least 5 upvotes and an answer. We format Stack Exchange posts as follows\n\n\nURL.\nWe converted 'URL' into human-readable form by following the instructions in the mm-extract repo",
"## Contributions\nAuthors: Zhangir Azerbayev, Edward Ayers, Bartosz Piotrowski. \n\nWe would like to thank Jeremy Avigad, Albert Jiang, and Wenda Li for their invaluable guidance, and the Hoskinson Center for Formal Mathematics for its support."
] | [
81,
212,
74,
77,
174,
411,
64
] | [
"passage: TAGS\n#task_categories-text-generation #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #language-English #license-apache-2.0 #math #mathematics #formal-mathematics #region-us \n# Dataset Description\nThe 'proof-pile' is a 13GB pre-training dataset of mathematical text that comprises 8.3 billion tokens (using the 'gpt-neox' tokenizer). Models trained on this dataset are coming soon :) The dataset is composed of diverse sources of both informal and formal mathematics, namely\n- URL (10GB)\n- Open-source math textbooks (50MB)\n- Formal mathematics libraries (500MB)\n - Lean mathlib and other Lean repositories \n - Isabelle AFP\n - Coq mathematical components and other Coq repositories \n - HOL Light\n - URL\n - Mizar Mathematical Library\n- Math Overflow and Math Stack Exchange (2.5GB)\n- Wiki-style sources (50MB)\n - ProofWiki\n - Wikipedia math articles\n- MATH dataset (6MB)\n\nThe construction of the dataset is reproducible using the code and instructions in the proof-pile Github\nrepo.# Supported Tasks\nThis dataset is intended to be used for pre-training and fine-tuning language models. We envision models trained on the 'proof-pile' will have many downstream applications, including informal quantitative reasoning, formal theorem proving, semantic search for formal mathematics, and autoformalization.# Languages\nAll informal mathematics in the 'proof-pile' is written in English and LaTeX (arXiv articles in other languages are filtered out using languagedetect). Formal theorem proving languages represented in this dataset are Lean 3, Isabelle, Coq, HOL Light, Metamath, and Mizar."
] |
c219bc69317de924709cd566a027284ffc79953f |
**Top Flutter Packages Dataset**
Flutter is an open source framework by Google for building beautiful, natively compiled, multi-platform applications from a single codebase. It is gaining quite a bit of popularity because of the ability to code in a single language and have it running on Android/iOS and the web as well.
This dataset contains a snapshot of Top 5000+ flutter/dart packages hosted on [Flutter package repository](https://pub.dev/)
The dataset was scraped in `July-2022`.
We aim to use this dataset to perform analysis and identify trends and get a bird's eye view of the rapidly evolving flutter ecosystem.
#### Maintainers:
- [Kondrolla Dinesh Reddy](https://twitter.com/KondrollaR)
- [Keshaw Soni](https://twitter.com/SoniKeshaw)
- [Somya Gautam](http://linkedin.in/in/somya-gautam)
| deepklarity/top-flutter-packages | [
"license:cc",
"region:us"
] | 2022-08-09T07:51:49+00:00 | {"license": "cc"} | 2022-08-09T08:05:39+00:00 | [] | [] | TAGS
#license-cc #region-us
|
Top Flutter Packages Dataset
Flutter is an open source framework by Google for building beautiful, natively compiled, multi-platform applications from a single codebase. It is gaining quite a bit of popularity because of the ability to code in a single language and have it running on Android/iOS and the web as well.
This dataset contains a snapshot of Top 5000+ flutter/dart packages hosted on Flutter package repository
The dataset was scraped in 'July-2022'.
We aim to use this dataset to perform analysis and identify trends and get a bird's eye view of the rapidly evolving flutter ecosystem.
#### Maintainers:
- Kondrolla Dinesh Reddy
- Keshaw Soni
- Somya Gautam
| [
"#### Mantainers:\n- Kondrolla Dinesh Reddy\n- Keshaw Soni\n- Somya Gautam"
] | [
"TAGS\n#license-cc #region-us \n",
"#### Mantainers:\n- Kondrolla Dinesh Reddy\n- Keshaw Soni\n- Somya Gautam"
] | [
11,
24
] | [
"passage: TAGS\n#license-cc #region-us \n#### Mantainers:\n- Kondrolla Dinesh Reddy\n- Keshaw Soni\n- Somya Gautam"
] |
eb6de1b8c90f77ec0a8cadc297268308367de753 |
**Top NPM Packages Dataset**
This dataset contains a snapshot of Top 3000 popular node packages hosted on [Node Package Manager](https://www.npmjs.com/)
The dataset was scraped in `July-2022`. This includes a combination of data gathered by [Libraries.io](https://libraries.io/) and [npm](https://www.npmjs.com/)
We aim to use this dataset to perform analysis and identify trends and get a bird's eye view of nodejs ecosystem.
#### Maintainers:
- [Keshaw Soni](https://twitter.com/SoniKeshaw)
- [Somya Gautam](http://linkedin.in/in/somya-gautam)
- [Kondrolla Dinesh Reddy](https://twitter.com/KondrollaR)
| deepklarity/top-npm-packages | [
"license:cc",
"region:us"
] | 2022-08-09T08:06:53+00:00 | {"license": "cc"} | 2022-08-09T08:13:13+00:00 | [] | [] | TAGS
#license-cc #region-us
|
Top NPM Packages Dataset
This dataset contains a snapshot of Top 3000 popular node packages hosted on Node Package Manager
The dataset was scraped in 'July-2022'. This includes a combination of data gathered by URL and npm
We aim to use this dataset to perform analysis and identify trends and get a bird's eye view of nodejs ecosystem.
#### Maintainers:
- Keshaw Soni
- Somya Gautam
- Kondrolla Dinesh Reddy
| [
"#### Mantainers:\n- Keshaw Soni\n- Somya Gautam\n- Kondrolla Dinesh Reddy"
] | [
"TAGS\n#license-cc #region-us \n",
"#### Mantainers:\n- Keshaw Soni\n- Somya Gautam\n- Kondrolla Dinesh Reddy"
] | [
11,
24
] | [
"passage: TAGS\n#license-cc #region-us \n#### Mantainers:\n- Keshaw Soni\n- Somya Gautam\n- Kondrolla Dinesh Reddy"
] |
1555264e93350b2cb253e4dd2ca7596b030cc143 |
**Indian Premier League Dataset**

This dataset contains info on all of the [IPL (Indian Premier League)](https://www.iplt20.com/) cricket matches.
Ball-by-Ball level info and scorecard info to be added soon.
The dataset was scraped in `July-2022`.
#### Maintainers:
- [Somya Gautam](http://linkedin.in/in/somya-gautam)
- [Kondrolla Dinesh Reddy](https://twitter.com/KondrollaR)
- [Keshaw Soni](https://twitter.com/SoniKeshaw)
| deepklarity/indian-premier-league | [
"license:cc",
"region:us"
] | 2022-08-09T08:40:42+00:00 | {"license": "cc"} | 2022-08-09T08:47:29+00:00 | [] | [] | TAGS
#license-cc #region-us
|
Indian Premier League Dataset
This dataset contains info on all of the IPL (Indian Premier League) cricket matches.
Ball-by-Ball level info and scorecard info to be added soon.
The dataset was scraped in 'July-2022'.
#### Maintainers:
- Somya Gautam
- Kondrolla Dinesh Reddy
- Keshaw Soni
| [
"#### Mantainers:\n- Somya Gautam\n- Kondrolla Dinesh Reddy\n- Keshaw Soni"
] | [
"TAGS\n#license-cc #region-us \n",
"#### Mantainers:\n- Somya Gautam\n- Kondrolla Dinesh Reddy\n- Keshaw Soni"
] | [
11,
24
] | [
"passage: TAGS\n#license-cc #region-us \n#### Mantainers:\n- Somya Gautam\n- Kondrolla Dinesh Reddy\n- Keshaw Soni"
] |
c310f2a990aa87b7119122cbbf6b4664c8c5b5b7 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: Blaise-g/longt5_tglobal_large_scitldr
* Dataset: Blaise-g/scitldr
* Config: Blaise-g--scitldr
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Blaise-g](https://huggingface.co/Blaise-g) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-Blaise-g__scitldr-89735e41-12705693 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-09T14:34:05+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["Blaise-g/scitldr"], "eval_info": {"task": "summarization", "model": "Blaise-g/longt5_tglobal_large_scitldr", "metrics": ["bertscore"], "dataset_name": "Blaise-g/scitldr", "dataset_config": "Blaise-g--scitldr", "dataset_split": "test", "col_mapping": {"text": "source", "target": "target"}}} | 2022-08-09T15:51:14+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: Blaise-g/longt5_tglobal_large_scitldr
* Dataset: Blaise-g/scitldr
* Config: Blaise-g--scitldr
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @Blaise-g for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: Blaise-g/longt5_tglobal_large_scitldr\n* Dataset: Blaise-g/scitldr\n* Config: Blaise-g--scitldr\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Blaise-g for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: Blaise-g/longt5_tglobal_large_scitldr\n* Dataset: Blaise-g/scitldr\n* Config: Blaise-g--scitldr\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Blaise-g for evaluating this model."
] | [
13,
110,
18
] | [
"passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: Blaise-g/longt5_tglobal_large_scitldr\n* Dataset: Blaise-g/scitldr\n* Config: Blaise-g--scitldr\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @Blaise-g for evaluating this model."
] |
506292df69a01a71aa75ff0fcdd162eba2120920 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: Blaise-g/longt5_tglobal_large_explanatory_baseline_scitldr
* Dataset: Blaise-g/scitldr
* Config: Blaise-g--scitldr
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Blaise-g](https://huggingface.co/Blaise-g) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-Blaise-g__scitldr-89735e41-12705694 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-09T14:34:10+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["Blaise-g/scitldr"], "eval_info": {"task": "summarization", "model": "Blaise-g/longt5_tglobal_large_explanatory_baseline_scitldr", "metrics": ["bertscore"], "dataset_name": "Blaise-g/scitldr", "dataset_config": "Blaise-g--scitldr", "dataset_split": "test", "col_mapping": {"text": "source", "target": "target"}}} | 2022-08-09T15:02:45+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: Blaise-g/longt5_tglobal_large_explanatory_baseline_scitldr
* Dataset: Blaise-g/scitldr
* Config: Blaise-g--scitldr
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @Blaise-g for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: Blaise-g/longt5_tglobal_large_explanatory_baseline_scitldr\n* Dataset: Blaise-g/scitldr\n* Config: Blaise-g--scitldr\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Blaise-g for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: Blaise-g/longt5_tglobal_large_explanatory_baseline_scitldr\n* Dataset: Blaise-g/scitldr\n* Config: Blaise-g--scitldr\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Blaise-g for evaluating this model."
] | [
13,
117,
18
] | [
"passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: Blaise-g/longt5_tglobal_large_explanatory_baseline_scitldr\n* Dataset: Blaise-g/scitldr\n* Config: Blaise-g--scitldr\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @Blaise-g for evaluating this model."
] |
f48e06e7e27a3e222fe5923a930ccdf2d3fd9eee | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: sshleifer/distilbart-xsum-12-6
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@grappler](https://huggingface.co/grappler) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-cnn_dailymail-1bafd1c4-12715695 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-09T14:47:17+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "sshleifer/distilbart-xsum-12-6", "metrics": [], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "test", "col_mapping": {"text": "article", "target": "highlights"}}} | 2022-08-09T15:13:13+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: sshleifer/distilbart-xsum-12-6
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @grappler for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: sshleifer/distilbart-xsum-12-6\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @grappler for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: sshleifer/distilbart-xsum-12-6\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @grappler for evaluating this model."
] | [
13,
92,
16
] | [
"passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: sshleifer/distilbart-xsum-12-6\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @grappler for evaluating this model."
] |
1cc485463d5c2c6c6e3ef239239eb9857e6bebb2 |
# SynTran-fa
Syntactically transformed version of Farsi QA datasets, built to produce fluent responses from questions and short answers. You can load this dataset with the code below:
```python
import datasets
data = datasets.load_dataset('SLPL/syntran-fa', split="train")
```
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Sharif-SLPL](https://github.com/Sharif-SLPL)
- **Repository:** [SynTran-fa](https://github.com/agp-internship/syntran-fa)
- **Point of Contact:** [Sadra Sabouri](mailto:[email protected])
### Dataset Summary
Generating fluent responses has always been challenging for the question-answering task, especially in low-resource languages like Farsi. In recent years there have been some efforts to enhance the size of Farsi datasets. Syntran-fa is a question-answering dataset that aggregates the short answers of earlier Farsi QA datasets and proposes a complete, fluent answer for each (question, short_answer) pair.
This dataset contains nearly 50,000 question-answer entries. The datasets used as our sources are listed in the [Source Data section](#source-data).
The main idea for this dataset comes from [Fluent Response Generation for Conversational Question Answering](https://aclanthology.org/2020.acl-main.19.pdf), where a "parser + syntactic rules" module is used to make different fluent answers from a question and a short answer. In this project, we used [stanza](https://stanfordnlp.github.io/stanza/) as our parser: we parse the question and generate a response from it using the short answers (sentences without verbs, up to ~4 words). One can continue this project by generating different permutations of the sentence's parts (and thus providing more than one sentence per answer) or by training a seq2seq model that does what our rule-based system does (by defining a new text-to-text task).
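To illustrate the parsing step, the sketch below runs stanza's Persian pipeline on a question and prints the dependency relations that such syntactic rules can key on. This is only an illustration of the parser; the actual rules used to assemble the fluent answers are not reproduced here.
```python
import stanza

# Download the Persian models and build a pipeline with dependency parsing.
stanza.download("fa")
nlp = stanza.Pipeline("fa", processors="tokenize,mwt,pos,lemma,depparse")

question = "باشگاه هاکی ساوتهمپتون چه نام دارد؟"
doc = nlp(question)

# Rule-based answer construction can branch on these dependency labels.
for sentence in doc.sentences:
    for word in sentence.words:
        print(word.text, word.upos, word.deprel)
```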
### Supported Tasks and Leaderboards
This dataset can be used for the question-answering task, especially when you are going to generate fluent responses. You can train a seq2seq model with this dataset to generate fluent responses - as done by [Fluent Response Generation for Conversational Question Answering](https://aclanthology.org/2020.acl-main.19.pdf).
### Languages
+ Persian (fa)
## Dataset Structure
Each row of the dataset looks like the following:
```json
{
'id': 0,
'question': 'باشگاه هاکی ساوتهمپتون چه نام دارد؟',
'short_answer': 'باشگاه هاکی ساوتهمپتون',
'fluent_answer': 'باشگاه هاکی ساوتهمپتون باشگاه هاکی ساوتهمپتون نام دارد.',
'bert_loss': 1.110097069682014
}
```
+ `id` : the entry id in the dataset
+ `question` : the question
+ `short_answer` : the short answer corresponding to the `question` (the primary answer)
+ `fluent_answer` : the fluent (long) answer generated from both the `question` and the `short_answer` (the secondary answer)
+ `bert_loss` : the loss that [pars-bert](https://huggingface.co/HooshvareLab/bert-base-parsbert-uncased) yields when given the `fluent_answer` as input. The higher it is, the less fluent the sentence is likely to be.
Note: the dataset is sorted in ascending order of `bert_loss`, so earlier rows are more likely to be fluent.
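Because rows are ordered by `bert_loss`, one way to keep only the most fluent answers is to filter on that field or slice the head of the dataset. The threshold and subset size below are arbitrary illustrations, not recommended values:
```python
import datasets

data = datasets.load_dataset("SLPL/syntran-fa", split="train")

# Keep only rows whose ParsBERT loss is below an arbitrary threshold.
fluent_subset = data.filter(lambda row: row["bert_loss"] < 2.0)

# Alternatively, take the first N rows, which are the most fluent ones.
top_fluent = data.select(range(1000))

print(len(fluent_subset), len(top_fluent))
```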
### Data Splits
Currently, the dataset provides only the `train` split. A `test` split will be added soon.
## Dataset Creation
### Source Data
The source datasets that we used are as follows:
+ [PersianQA](https://github.com/sajjjadayobi/PersianQA)
+ [PersianQuAD](https://ieeexplore.ieee.org/document/9729745)
#### Initial Data Collection and Normalization
We extracted all short-answer entries (sentences without verbs, up to ~4 words) from all open-source Farsi QA datasets and used rules based on the question parse tree to build long (fluent) answers.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
The dataset is entirely a subset of well-known open-source datasets, so all of its information is already publicly available on the internet. Nevertheless, we do not take responsibility for any of it.
## Additional Information
### Dataset Curators
The dataset was gathered entirely during the Asr Gooyesh Pardaz company's summer internship, under the supervision of Soroush Gooran and Prof. Hossein Sameti and the mentorship of Sadra Sabouri. This project was Farhan Farsi's first internship project.
### Licensing Information
MIT
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@farhaaaaa](https://github.com/farhaaaaa) and [@sadrasabouri](https://github.com/sadrasabouri) for adding this dataset. | SLPL/syntran-fa | [
"task_categories:question-answering",
"task_categories:text2text-generation",
"task_categories:text-generation",
"multilinguality:monolingual",
"size_categories:30k<n<50k",
"language:fa",
"license:mit",
"conditional-text-generation",
"conversational-question-answering",
"region:us"
] | 2022-08-10T03:13:13+00:00 | {"language": ["fa"], "license": "mit", "multilinguality": ["monolingual"], "size_categories": ["30k<n<50k"], "task_categories": ["question-answering", "text2text-generation", "text-generation"], "task_ids": [], "pretty_name": "SynTranFa", "tags": ["conditional-text-generation", "conversational-question-answering"]} | 2022-11-03T06:34:17+00:00 | [] | [
"fa"
] | TAGS
#task_categories-question-answering #task_categories-text2text-generation #task_categories-text-generation #multilinguality-monolingual #size_categories-30k<n<50k #language-Persian #license-mit #conditional-text-generation #conversational-question-answering #region-us
|
# SynTran-fa
Syntactically transformed version of Farsi QA datasets, built to produce fluent responses from questions and short answers. You can load this dataset with the code below:
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Dataset Creation
- Source Data
- Considerations for Using the Data
- Social Impact of Dataset
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: Sharif-SLPL
- Repository: SynTran-fa
- Point of Contact: Sadra Sabouri
### Dataset Summary
Generating fluent responses has always been challenging for the question-answering task, especially in low-resource languages like Farsi. In recent years there have been some efforts to enhance the size of Farsi datasets. Syntran-fa is a question-answering dataset that aggregates the short answers of earlier Farsi QA datasets and proposes a complete, fluent answer for each (question, short_answer) pair.
This dataset contains nearly 50,000 question-answer entries. The datasets used as our sources are listed in the Source Data section.
The main idea for this dataset comes from Fluent Response Generation for Conversational Question Answering, where a "parser + syntactic rules" module is used to make different fluent answers from a question and a short answer. In this project, we used stanza as our parser: we parse the question and generate a response from it using the short answers (sentences without verbs, up to ~4 words). One can continue this project by generating different permutations of the sentence's parts (and thus providing more than one sentence per answer) or by training a seq2seq model that does what our rule-based system does (by defining a new text-to-text task).
### Supported Tasks and Leaderboards
This dataset can be used for the question-answering task, especially when you are going to generate fluent responses. You can train a seq2seq model with this dataset to generate fluent responses - as done by Fluent Response Generation for Conversational Question Answering.
### Languages
+ Persian (fa)
## Dataset Structure
Each row of the dataset looks like the following:
+ 'id' : the entry id in the dataset
+ 'question' : the question
+ 'short_answer' : the short answer corresponding to the 'question' (the primary answer)
+ 'fluent_answer' : the fluent (long) answer generated from both the 'question' and the 'short_answer' (the secondary answer)
+ 'bert_loss' : the loss that pars-bert yields when given the 'fluent_answer' as input. The higher it is, the less fluent the sentence is likely to be.
Note: the dataset is sorted in ascending order of 'bert_loss', so earlier rows are more likely to be fluent.
### Data Splits
Currently, the dataset provides only the 'train' split. A 'test' split will be added soon.
## Dataset Creation
### Source Data
The source datasets that we used are as follows:
+ PersianQA
+ PersianQuAD
#### Initial Data Collection and Normalization
We extracted all short-answer entries (sentences without verbs, up to ~4 words) from all open-source Farsi QA datasets and used rules based on the question parse tree to build long (fluent) answers.
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
The dataset is entirely a subset of well-known open-source datasets, so all of its information is already publicly available on the internet. Nevertheless, we do not take responsibility for any of it.
## Additional Information
### Dataset Curators
The dataset was gathered entirely during the Asr Gooyesh Pardaz company's summer internship, under the supervision of Soroush Gooran and Prof. Hossein Sameti and the mentorship of Sadra Sabouri. This project was Farhan Farsi's first internship project.
### Licensing Information
MIT
### Contributions
Thanks to @farhaaaaa and @sadrasabouri for adding this dataset. | [
"# SynTran-fa \nSyntactic Transformed Version of Farsi QA datasets to make fluent responses from questions and short answers. You can use this dataset by the code below:",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n- Dataset Creation\n - Source Data\n- Considerations for Using the Data\n - Social Impact of Dataset\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n \n- Homepage: Sharif-SLPL\n- Repository: SynTran-fa\n- Point of Contact: Sadra Sabouri",
"### Dataset Summary\n\nGenerating fluent responses has always been challenging for the question-answering task, especially in low-resource languages like Farsi. In recent years there were some efforts for enhancing the size of datasets in Farsi. Syntran-fa is a question-answering dataset that accumulates the former Farsi QA dataset's short answers and proposes a complete fluent answer for each pair of (question, short_answer).\n\nThis dataset contains nearly 50,000 indices of questions and answers. The dataset that has been used as our sources are in Source Data section.\n\nThe main idea for this dataset comes from Fluent Response Generation for Conversational Question Answering where they used a \"parser + syntactic rules\" module to make different fluent answers from a pair of question and a short answer using a parser and some syntactic rules. In this project, we used stanza as our parser to parse the question and generate a response according to it using the short (sentences without verbs - up to ~4 words) answers. One can continue this project by generating different permutations of the sentence's parts (and thus providing more than one sentence for an answer) or training a seq2seq model which does what we do with our rule-based system (by defining a new text-to-text task).",
"### Supported Tasks and Leaderboards\n\nThis dataset can be used for the question-answering task, especially when you are going to generate fluent responses. You can train a seq2seq model with this dataset to generate fluent responses - as done by Fluent Response Generation for Conversational Question Answering.",
"### Languages\n\n+ Persian (fa)",
"## Dataset Structure\nEach row of the dataset will look like something like the below:\n\n+ 'id' : the entry id in dataset\n+ 'question' : the question\n+ 'short_answer' : the short answer corresponding to the 'question' (the primary answer)\n+ 'fluent_answer' : fluent (long) answer generated from both 'question' and the 'short_answer' (the secondary answer)\n+ 'bert_loss' : the loss that pars-bert gives when inputting the 'fluent_answer' to it. As it increases the sentence is more likely to be influent.\n\nNote: the dataset is sorted increasingly by the 'bert_loss', so first sentences are more likely to be fluent.",
"### Data Splits\n\nCurrently, the dataset just provided the 'train' split. There would be a 'test' split soon.",
"## Dataset Creation",
"### Source Data\nThe source datasets that we used are as follows:\n\n+ PersianQA\n+ PersianQuAD",
"#### Initial Data Collection and Normalization\n\nWe extract all short answer (sentences without verbs - up to ~4 words) entries of all open source QA datasets in Farsi and used some rules featuring the question parse tree to make long (fluent) answers.",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\nThe dataset is completely a subset of open source known datasets so all information in it is already there on the internet as a open-source dataset. By the way, we do not take responsibility for any of that.",
"## Additional Information",
"### Dataset Curators\n\nThe dataset is gathered together completely in the Asr Gooyesh Pardaz company's summer internship under the supervision of Soroush Gooran, Prof. Hossein Sameti, and the mentorship of Sadra Sabouri. This project was Farhan Farsi's first internship project.",
"### Licensing Information\n\nMIT",
"### Contributions\n\nThanks to @farhaaaaa and @sadrasabouri for adding this dataset."
] | [
"TAGS\n#task_categories-question-answering #task_categories-text2text-generation #task_categories-text-generation #multilinguality-monolingual #size_categories-30k<n<50k #language-Persian #license-mit #conditional-text-generation #conversational-question-answering #region-us \n",
"# SynTran-fa \nSyntactic Transformed Version of Farsi QA datasets to make fluent responses from questions and short answers. You can use this dataset by the code below:",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n- Dataset Creation\n - Source Data\n- Considerations for Using the Data\n - Social Impact of Dataset\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n \n- Homepage: Sharif-SLPL\n- Repository: SynTran-fa\n- Point of Contact: Sadra Sabouri",
"### Dataset Summary\n\nGenerating fluent responses has always been challenging for the question-answering task, especially in low-resource languages like Farsi. In recent years there were some efforts for enhancing the size of datasets in Farsi. Syntran-fa is a question-answering dataset that accumulates the former Farsi QA dataset's short answers and proposes a complete fluent answer for each pair of (question, short_answer).\n\nThis dataset contains nearly 50,000 indices of questions and answers. The dataset that has been used as our sources are in Source Data section.\n\nThe main idea for this dataset comes from Fluent Response Generation for Conversational Question Answering where they used a \"parser + syntactic rules\" module to make different fluent answers from a pair of question and a short answer using a parser and some syntactic rules. In this project, we used stanza as our parser to parse the question and generate a response according to it using the short (sentences without verbs - up to ~4 words) answers. One can continue this project by generating different permutations of the sentence's parts (and thus providing more than one sentence for an answer) or training a seq2seq model which does what we do with our rule-based system (by defining a new text-to-text task).",
"### Supported Tasks and Leaderboards\n\nThis dataset can be used for the question-answering task, especially when you are going to generate fluent responses. You can train a seq2seq model with this dataset to generate fluent responses - as done by Fluent Response Generation for Conversational Question Answering.",
"### Languages\n\n+ Persian (fa)",
"## Dataset Structure\nEach row of the dataset will look like something like the below:\n\n+ 'id' : the entry id in dataset\n+ 'question' : the question\n+ 'short_answer' : the short answer corresponding to the 'question' (the primary answer)\n+ 'fluent_answer' : fluent (long) answer generated from both 'question' and the 'short_answer' (the secondary answer)\n+ 'bert_loss' : the loss that pars-bert gives when inputting the 'fluent_answer' to it. As it increases the sentence is more likely to be influent.\n\nNote: the dataset is sorted increasingly by the 'bert_loss', so first sentences are more likely to be fluent.",
"### Data Splits\n\nCurrently, the dataset just provided the 'train' split. There would be a 'test' split soon.",
"## Dataset Creation",
"### Source Data\nThe source datasets that we used are as follows:\n\n+ PersianQA\n+ PersianQuAD",
"#### Initial Data Collection and Normalization\n\nWe extract all short answer (sentences without verbs - up to ~4 words) entries of all open source QA datasets in Farsi and used some rules featuring the question parse tree to make long (fluent) answers.",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\nThe dataset is completely a subset of open source known datasets so all information in it is already there on the internet as a open-source dataset. By the way, we do not take responsibility for any of that.",
"## Additional Information",
"### Dataset Curators\n\nThe dataset is gathered together completely in the Asr Gooyesh Pardaz company's summer internship under the supervision of Soroush Gooran, Prof. Hossein Sameti, and the mentorship of Sadra Sabouri. This project was Farhan Farsi's first internship project.",
"### Licensing Information\n\nMIT",
"### Contributions\n\nThanks to @farhaaaaa and @sadrasabouri for adding this dataset."
] | [
90,
45,
82,
31,
305,
72,
10,
174,
30,
5,
26,
61,
5,
5,
9,
56,
5,
74,
7,
23
] | [
"passage: TAGS\n#task_categories-question-answering #task_categories-text2text-generation #task_categories-text-generation #multilinguality-monolingual #size_categories-30k<n<50k #language-Persian #license-mit #conditional-text-generation #conversational-question-answering #region-us \n# SynTran-fa \nSyntactic Transformed Version of Farsi QA datasets to make fluent responses from questions and short answers. You can use this dataset by the code below:## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n- Dataset Creation\n - Source Data\n- Considerations for Using the Data\n - Social Impact of Dataset\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n \n- Homepage: Sharif-SLPL\n- Repository: SynTran-fa\n- Point of Contact: Sadra Sabouri",
"passage: ### Dataset Summary\n\nGenerating fluent responses has always been challenging for the question-answering task, especially in low-resource languages like Farsi. In recent years there were some efforts for enhancing the size of datasets in Farsi. Syntran-fa is a question-answering dataset that accumulates the former Farsi QA dataset's short answers and proposes a complete fluent answer for each pair of (question, short_answer).\n\nThis dataset contains nearly 50,000 indices of questions and answers. The dataset that has been used as our sources are in Source Data section.\n\nThe main idea for this dataset comes from Fluent Response Generation for Conversational Question Answering where they used a \"parser + syntactic rules\" module to make different fluent answers from a pair of question and a short answer using a parser and some syntactic rules. In this project, we used stanza as our parser to parse the question and generate a response according to it using the short (sentences without verbs - up to ~4 words) answers. One can continue this project by generating different permutations of the sentence's parts (and thus providing more than one sentence for an answer) or training a seq2seq model which does what we do with our rule-based system (by defining a new text-to-text task).### Supported Tasks and Leaderboards\n\nThis dataset can be used for the question-answering task, especially when you are going to generate fluent responses. You can train a seq2seq model with this dataset to generate fluent responses - as done by Fluent Response Generation for Conversational Question Answering.### Languages\n\n+ Persian (fa)## Dataset Structure\nEach row of the dataset will look like something like the below:\n\n+ 'id' : the entry id in dataset\n+ 'question' : the question\n+ 'short_answer' : the short answer corresponding to the 'question' (the primary answer)\n+ 'fluent_answer' : fluent (long) answer generated from both 'question' and the 'short_answer' (the secondary answer)\n+ 'bert_loss' : the loss that pars-bert gives when inputting the 'fluent_answer' to it. As it increases the sentence is more likely to be influent.\n\nNote: the dataset is sorted increasingly by the 'bert_loss', so first sentences are more likely to be fluent.### Data Splits\n\nCurrently, the dataset just provided the 'train' split. There would be a 'test' split soon.## Dataset Creation### Source Data\nThe source datasets that we used are as follows:\n\n+ PersianQA\n+ PersianQuAD#### Initial Data Collection and Normalization\n\nWe extract all short answer (sentences without verbs - up to ~4 words) entries of all open source QA datasets in Farsi and used some rules featuring the question parse tree to make long (fluent) answers.### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information\n\nThe dataset is completely a subset of open source known datasets so all information in it is already there on the internet as a open-source dataset. By the way, we do not take responsibility for any of that.## Additional Information"
] |
8bea8b3d7a39664cd7827474342599b6ab016991 |
# Dataset Card for LUDWIG
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository: https://github.com/ucl-dark/ludwig**
- **Paper: TODO**
- **Leaderboard: TODO**
- **Point of Contact: Laura Ruis**
### Dataset Summary
LUDWIG (**L**anguage **U**nderstanding **W**ith **I**mplied meanin**G**) is a dataset containing English conversational implicatures.
Implicature is the act of meaning or implying one thing by saying something else.
There are different types of implicatures, from simple ones like "Some guests came to the party"
(implying not all guests came) to more complicated implicatures that depend on context, like
"A: Are you going to the party this Friday? B: There's a global pandemic.", implying no. Implicatures serve a wide range of
goals in communication: efficiency, style, navigating social interactions, and more. We cannot fully
understand utterances without understanding their implications.
The implicatures in this dataset are conversational because they come in utterance-response tuples.
Each tuple has an implicature associated with it,
which is the implied meaning of the response. For example:
Utterance: Are you going to the party this Friday?
Response: There's a global pandemic.
Implicature: No.
This dataset can be used to evaluate language models on their pragmatic language understanding.
### Supported Tasks and Leaderboards
- ```text-generation```: The dataset can be used to evaluate a model's ability to generate the correct next token, i.e. "yes" or "no", depending on the implicature. For example, if you pass the model an example wrapped in a template like "Esther asked 'Are you coming to the party this Friday' and Juan responded 'There's a global pandemic', which means" the correct completion would be "no". Success in this task can be determined by the ability to generate the correct answer or by the ability to give the right token a higher likelihood than the wrong token, e.g. p("no") > p("yes") (a minimal sketch of this comparison follows this list).
- ```fill-mask```: The dataset can be used to evaluate a model's ability to fill in the correct token, i.e. "yes" or "no", depending on the implicature. For example, if you pass the model an example wrapped in a template like "Esther asked 'Are you coming to the party this Friday' and Juan responded 'There's a global pandemic', which means [mask]" the correct mask fill would be "no". Success in this task can be determined by the ability to fill in the correct answer or by the ability to give the right token a higher likelihood than the wrong token, e.g. p("no") > p("yes").
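A minimal sketch of the likelihood comparison described above. The choice of GPT-2 as a stand-in model and the single-token treatment of " yes"/" no" are assumptions, not part of the dataset:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")           # stand-in model, not prescribed by this card
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = ("Esther asked 'Are you coming to the party this Friday' and "
          "Juan responded 'There's a global pandemic', which means")

inputs = tok(prompt, return_tensors="pt")
with torch.no_grad():
    next_token_logits = model(**inputs).logits[0, -1]  # distribution over the next token
log_probs = torch.log_softmax(next_token_logits, dim=-1)

# Assumes " yes" and " no" are single tokens in this tokenizer (true for GPT-2).
yes_id = tok(" yes")["input_ids"][0]
no_id = tok(" no")["input_ids"][0]
print("log p(yes):", log_probs[yes_id].item())
print("log p(no): ", log_probs[no_id].item())
```
An example counts as answered correctly when the coherent token receives the higher log-probability.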
### Languages
English
## Dataset Structure
### Data Instances
Find below an example of a 1-shot example instance (1-shot because there's 1 prompt example).
```
{
"id": 1,
"utterance": "Are you going to the party this Friday?",
"response": "There's a global pandemic.",
"implicature": "No.",
"incoherent_implicature": "Yes".
"prompts": [
{
"utterance": "Was that hot?",
"response": "The sun was scorching.",
"implicature": "Yes.",
"incoherent_implicature": "No.".
}
]
}
```
### Data Fields
```
{
"id": int, # unique identifier of data points
"utterance": str, # the utterance in this example
"response": str, # the response in this example
"implicature": str, # the implied meaning of the response, e.g. 'yes'
"incoherent_implicature": str, # the wrong implied meaning, e.g. 'no'
"prompts": [ # optional: prompt examples from the validation set
{
"utterance": str,
"response": str,
"implicature": str,
"incoherent_implicature": str,
}
]
}
```
### Data Splits
**Validation**: 118 instances that can be used for finetuning or few-shot learning
**Test**: 600 instances that can be used for evaluating models.
NB: the splits weren't originally part of the paper that presents this dataset. The same goes for the k-shot prompts. Added
by @LauraRuis.
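A minimal loading sketch; the split names `validation` and `test` are assumptions based on the description above:
```python
from datasets import load_dataset

# Split names are assumed from the card's "Validation"/"Test" description.
ludwig_val = load_dataset("UCL-DARK/ludwig", split="validation")
ludwig_test = load_dataset("UCL-DARK/ludwig", split="test")

print(len(ludwig_val), len(ludwig_test))  # expected: 118 and 600 per this card
```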
## Dataset Creation
### Curation Rationale
Pragmatic language understanding is a crucial aspect of human communication, and implicatures are the primary object of study in this field.
We want computational models of language to understand all of a speaker's implications.
### Source Data
#### Initial Data Collection and Normalization
"Conversational implicatures in English dialogue: Annotated dataset", Elizabeth Jasmi George and Radhika Mamidi 2020.
[Link to paper](https://doi.org/10.1016/j.procs.2020.04.251)
#### Who are the source language producers?
These written representations of the utterances were collected manually by scraping and transcribing relevant sources from August 2019 to August 2020. The sources of the dialogues in the data include TOEFL listening-comprehension short conversations, movie dialogues from IMSDb, and websites explaining idioms, similes, metaphors and hyperboles. The implicatures were annotated manually.
### Annotations
#### Annotation process
Manually annotated by dataset collectors.
#### Who are the annotators?
Authors of the original paper.
### Personal and Sensitive Information
All the data is public and not sensitive.
## Considerations for Using the Data
### Social Impact of Dataset
Any application that requires communicating with humans requires pragmatic language understanding.
### Discussion of Biases
Implicatures can be biased toward specific cultures. For example, whether the Pope is Catholic (a commonly used response implicature to indicate "yes") might not be common knowledge for everyone.
Implicatures are also language-specific: the way people use pragmatic language depends on the language. This dataset focuses only on the English language.
### Other Known Limitations
None yet.
## Additional Information
### Dataset Curators
Elizabeth Jasmi George and Radhika Mamidi
### Licensing Information
[license](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
@article{George:Mamidi:2020,
author = {George, Elizabeth Jasmi and Mamidi, Radhika},
doi = {10.1016/j.procs.2020.04.251},
journal = {Procedia Computer Science},
keywords = {},
note = {https://doi.org/10.1016/j.procs.2020.04.251},
number = {},
pages = {2316-2323},
title = {Conversational implicatures in English dialogue: Annotated dataset},
url = {https://app.dimensions.ai/details/publication/pub.1128198497},
volume = {171},
year = {2020}
}
```
### Contributions
Thanks to [@LauraRuis](https://github.com/LauraRuis) for adding this dataset. | UCL-DARK/ludwig | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"implicature",
"pragmatics",
"language",
"llm",
"conversation",
"dialogue",
"region:us"
] | 2022-08-10T06:56:34+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": ["original"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["language-modeling", "masked-language-modeling"], "pretty_name": "ludwig", "tags": ["implicature", "pragmatics", "language", "llm", "conversation", "dialogue"]} | 2022-08-11T14:51:56+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-English #license-cc-by-4.0 #implicature #pragmatics #language #llm #conversation #dialogue #region-us
|
# Dataset Card for LUDWIG
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage:
- Repository: URL
- Paper: TODO
- Leaderboard: TODO
- Point of Contact: Laura Ruis
### Dataset Summary
LUDWIG (Language Understanding With Implied meaninG) is a dataset containing English conversational implicatures.
Implicature is the act of meaning or implying one thing by saying something else.
There are different types of implicatures, from simple ones like "Some guests came to the party"
(implying not all guests came) to more complicated implicatures that depend on context, like
"A: Are you going to the party this Friday? B: There's a global pandemic.", implying no. Implicatures serve a wide range of
goals in communication: efficiency, style, navigating social interactions, and more. We cannot fully
understand utterances without understanding their implications.
The implicatures in this dataset are conversational because they come in utterance-response tuples.
Each tuple has an implicature associated with it,
which is the implied meaning of the response. For example:
Utterance: Are you going to the party this Friday?
Response: There's a global pandemic.
Implicature: No.
This dataset can be used to evaluate language models on their pragmatic language understanding.
### Supported Tasks and Leaderboards
- : The dataset can be used to evaluate a model's ability to generate the correct next token, i.e. "yes" or "no", depending on the implicature. For example, if you pass the model an example wrapped in a template like "Esther asked 'Are you coming to the party this Friday' and Juan responded 'There's a global pandemic', which means" the correct completion would be "no". Success in this task can be determined by the ability to generate the correct answer or by the ability to give the right token a higher likelihood than the wrong token, e.g. p("no") > p("yes").
- : The dataset can be used to evaluate a model's ability to fill in the correct token, i.e. "yes" or "no", depending on the implicature. For example, if you pass the model an example wrapped in a template like "Esther asked 'Are you coming to the party this Friday' and Juan responded 'There's a global pandemic', which means [mask]" the correct mask fill would be "no". Success in this task can be determined by the ability to fill in the correct answer or by the ability to give the right token a higher likelihood than the wrong token, e.g. p("no") > p("yes").
### Languages
English
## Dataset Structure
### Data Instances
Find below an example of a 1-shot example instance (1-shot because there's 1 prompt example).
### Data Fields
### Data Splits
Validation: 118 instances that can be used for finetuning or few-shot learning
Test: 600 instances that can be used for evaluating models.
NB: the splits weren't originally part of the paper that presents this dataset. The same goes for the k-shot prompts. Added
by @LauraRuis.
## Dataset Creation
### Curation Rationale
Pragmatic language understanding is a crucial aspect of human communication, and implicatures are the primary object of study in this field.
We want computational models of language to understand all of a speaker's implications.
### Source Data
#### Initial Data Collection and Normalization
"Conversational implicatures in English dialogue: Annotated dataset", Elizabeth Jasmi George and Radhika Mamidi 2020.
Link to paper
#### Who are the source language producers?
These written representations of the utterances were collected manually by scraping and transcribing relevant sources from August 2019 to August 2020. The sources of the dialogues in the data include TOEFL listening-comprehension short conversations, movie dialogues from IMSDb, and websites explaining idioms, similes, metaphors and hyperboles. The implicatures were annotated manually.
### Annotations
#### Annotation process
Manually annotated by dataset collectors.
#### Who are the annotators?
Authors of the original paper.
### Personal and Sensitive Information
All the data is public and not sensitive.
## Considerations for Using the Data
### Social Impact of Dataset
Any application that requires communicating with humans requires pragmatic language understanding.
### Discussion of Biases
Implicatures can be biased toward specific cultures. For example, whether the Pope is Catholic (a commonly used response implicature to indicate "yes") might not be common knowledge for everyone.
Implicatures are also language-specific: the way people use pragmatic language depends on the language. This dataset focuses only on the English language.
### Other Known Limitations
None yet.
## Additional Information
### Dataset Curators
Elizabeth Jasmi George and Radhika Mamidi
### Licensing Information
license
### Contributions
Thanks to @LauraRuis for adding this dataset. | [
"# Dataset Card for LUDWIG",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository: URL\n- Paper: TODO\n- Leaderboard: TODO\n- Point of Contact: Laura Ruis",
"### Dataset Summary\n\nLUDWIG (Language Understanding With Implied meaninG) is a dataset containing English conversational implicatures.\nImplicature is the act of meaning or implying one thing by saying something else. \nThere's different types of implicatures, from simple ones like \"Some guests came to the party\" \n(implying not all guests came) to more complicated implicatures that depend on context like \n\"A: Are you going to the party this Friday? B: There's a global pandemic.\", implying no. Implicatures serve a wide range of\ngoals in communication: efficiency, style, navigating social interactions, and more. We cannot fully\nunderstand utterances without understanding their implications. \nThe implicatures in this dataset are conversational because they come in utterance-response tuples. \nEach tuple has an implicature associated with it,\nwhich is the implied meaning of the response. For example:\n\nUtterance: Are you going to the party this Friday? \nResponse: There's a global pandemic. \nImplicature: No. \n\nThis dataset can be used to evaluate language models on their pragmatic language understanding.",
"### Supported Tasks and Leaderboards\n\n- : The dataset can be used to evaluate a models ability to generate the correct next token, i.e. \"yes\" or \"no\", depending on the implicature. For example, if you pass the model an example wrapped in a template like \"Esther asked 'Are you coming to the party this Friday' and Juan responded 'There's a global pandemic', which means\" the correct completion would be \"no\". Success in this task can be determined by the ability to generate the correct answer or by the ability to give the right token a higher likelihood than the wrong token, e.g. p(\"no\") > p(\"yes\").\n- : The dataset can be used to evaluate a models ability to fill the correct token, i.e. \"yes\" or \"no\", depending on the implicature. For example, if you pass the model an example wrapped in a template like \"Esther asked 'Are you coming to the party this Friday' and Juan responded 'There's a global pandemic', which means [mask]\" the correct mask-fill would be \"no\". Success in this task can be determined by the ability to fill the correct answer or by the ability to give the right token a higher likelihood than the wrong token, e.g. p(\"no\") > p(\"yes\").",
"### Languages\n\nEnglish",
"## Dataset Structure",
"### Data Instances\n\nFind below an example of a 1-shot example instance (1-shot because there's 1 prompt example).",
"### Data Fields",
"### Data Splits\n\nValidation: 118 instances that can be used for finetuning or few-shot learning \nTest: 600 instances that can be used for evaluating models.\n\nNB: the splits weren't originally part of the paper that presents this dataset. The same goes for the k-shot prompts. Added\nby @LauraRuis.",
"## Dataset Creation",
"### Curation Rationale\n\nPragmatic language understanding is a crucial aspect of human communication, and implicatures are the primary object of study in this field.\nWe want computational models of language to understand all the speakers implications.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\"Conversational implicatures in English dialogue: Annotated dataset\", Elizabeth Jasmi George and Radhika Mamidi 2020.\n\nLink to paper",
"#### Who are the source language producers?\n\nThese written representations of the utterances are collected manually by scraping and transcribing from relevant sources from August, 2019 to August, 2020. The source of dialogues in the data include TOEFL listening comprehension short conversations, movie dialogues from IMSDb and websites explaining idioms, similes, metaphors and hyperboles. The implicatures are annotated manually.",
"### Annotations",
"#### Annotation process\n\nManually annotated by dataset collectors.",
"#### Who are the annotators?\n\nAuthors of the original paper.",
"### Personal and Sensitive Information\n\nAll the data is public and not sensitive.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nAny application that requires communicating with humans requires pragmatic language understanding.",
"### Discussion of Biases\n\nImplicatures can be biased to specific cultures. For example, whether the Pope is Catholic (a common used response implicature to indicate \"yes\") might not be common knowledge for everyone.\nImplicatures are also language-specific, the way people use pragmatic language depends on the language. This dataset only focuses on the English language.",
"### Other Known Limitations\n\nNone yet.",
"## Additional Information",
"### Dataset Curators\n\nElizabeth Jasmi George and Radhika Mamidi",
"### Licensing Information\n\nlicense",
"### Contributions\n\nThanks to @LauraRuis for adding this dataset."
] | [
"TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-English #license-cc-by-4.0 #implicature #pragmatics #language #llm #conversation #dialogue #region-us \n",
"# Dataset Card for LUDWIG",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository: URL\n- Paper: TODO\n- Leaderboard: TODO\n- Point of Contact: Laura Ruis",
"### Dataset Summary\n\nLUDWIG (Language Understanding With Implied meaninG) is a dataset containing English conversational implicatures.\nImplicature is the act of meaning or implying one thing by saying something else. \nThere's different types of implicatures, from simple ones like \"Some guests came to the party\" \n(implying not all guests came) to more complicated implicatures that depend on context like \n\"A: Are you going to the party this Friday? B: There's a global pandemic.\", implying no. Implicatures serve a wide range of\ngoals in communication: efficiency, style, navigating social interactions, and more. We cannot fully\nunderstand utterances without understanding their implications. \nThe implicatures in this dataset are conversational because they come in utterance-response tuples. \nEach tuple has an implicature associated with it,\nwhich is the implied meaning of the response. For example:\n\nUtterance: Are you going to the party this Friday? \nResponse: There's a global pandemic. \nImplicature: No. \n\nThis dataset can be used to evaluate language models on their pragmatic language understanding.",
"### Supported Tasks and Leaderboards\n\n- : The dataset can be used to evaluate a models ability to generate the correct next token, i.e. \"yes\" or \"no\", depending on the implicature. For example, if you pass the model an example wrapped in a template like \"Esther asked 'Are you coming to the party this Friday' and Juan responded 'There's a global pandemic', which means\" the correct completion would be \"no\". Success in this task can be determined by the ability to generate the correct answer or by the ability to give the right token a higher likelihood than the wrong token, e.g. p(\"no\") > p(\"yes\").\n- : The dataset can be used to evaluate a models ability to fill the correct token, i.e. \"yes\" or \"no\", depending on the implicature. For example, if you pass the model an example wrapped in a template like \"Esther asked 'Are you coming to the party this Friday' and Juan responded 'There's a global pandemic', which means [mask]\" the correct mask-fill would be \"no\". Success in this task can be determined by the ability to fill the correct answer or by the ability to give the right token a higher likelihood than the wrong token, e.g. p(\"no\") > p(\"yes\").",
"### Languages\n\nEnglish",
"## Dataset Structure",
"### Data Instances\n\nFind below an example of a 1-shot example instance (1-shot because there's 1 prompt example).",
"### Data Fields",
"### Data Splits\n\nValidation: 118 instances that can be used for finetuning or few-shot learning \nTest: 600 instances that can be used for evaluating models.\n\nNB: the splits weren't originally part of the paper that presents this dataset. The same goes for the k-shot prompts. Added\nby @LauraRuis.",
"## Dataset Creation",
"### Curation Rationale\n\nPragmatic language understanding is a crucial aspect of human communication, and implicatures are the primary object of study in this field.\nWe want computational models of language to understand all the speakers implications.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\"Conversational implicatures in English dialogue: Annotated dataset\", Elizabeth Jasmi George and Radhika Mamidi 2020.\n\nLink to paper",
"#### Who are the source language producers?\n\nThese written representations of the utterances are collected manually by scraping and transcribing from relevant sources from August, 2019 to August, 2020. The source of dialogues in the data include TOEFL listening comprehension short conversations, movie dialogues from IMSDb and websites explaining idioms, similes, metaphors and hyperboles. The implicatures are annotated manually.",
"### Annotations",
"#### Annotation process\n\nManually annotated by dataset collectors.",
"#### Who are the annotators?\n\nAuthors of the original paper.",
"### Personal and Sensitive Information\n\nAll the data is public and not sensitive.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nAny application that requires communicating with humans requires pragmatic language understanding.",
"### Discussion of Biases\n\nImplicatures can be biased to specific cultures. For example, whether the Pope is Catholic (a common used response implicature to indicate \"yes\") might not be common knowledge for everyone.\nImplicatures are also language-specific, the way people use pragmatic language depends on the language. This dataset only focuses on the English language.",
"### Other Known Limitations\n\nNone yet.",
"## Additional Information",
"### Dataset Curators\n\nElizabeth Jasmi George and Radhika Mamidi",
"### Licensing Information\n\nlicense",
"### Contributions\n\nThanks to @LauraRuis for adding this dataset."
] | [
136,
9,
125,
32,
255,
312,
5,
6,
27,
5,
81,
5,
48,
4,
40,
97,
5,
16,
16,
17,
8,
22,
82,
11,
5,
16,
7,
18
] | [
"passage: TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-English #license-cc-by-4.0 #implicature #pragmatics #language #llm #conversation #dialogue #region-us \n# Dataset Card for LUDWIG## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage:\n- Repository: URL\n- Paper: TODO\n- Leaderboard: TODO\n- Point of Contact: Laura Ruis",
"passage: ### Dataset Summary\n\nLUDWIG (Language Understanding With Implied meaninG) is a dataset containing English conversational implicatures.\nImplicature is the act of meaning or implying one thing by saying something else. \nThere's different types of implicatures, from simple ones like \"Some guests came to the party\" \n(implying not all guests came) to more complicated implicatures that depend on context like \n\"A: Are you going to the party this Friday? B: There's a global pandemic.\", implying no. Implicatures serve a wide range of\ngoals in communication: efficiency, style, navigating social interactions, and more. We cannot fully\nunderstand utterances without understanding their implications. \nThe implicatures in this dataset are conversational because they come in utterance-response tuples. \nEach tuple has an implicature associated with it,\nwhich is the implied meaning of the response. For example:\n\nUtterance: Are you going to the party this Friday? \nResponse: There's a global pandemic. \nImplicature: No. \n\nThis dataset can be used to evaluate language models on their pragmatic language understanding.### Supported Tasks and Leaderboards\n\n- : The dataset can be used to evaluate a models ability to generate the correct next token, i.e. \"yes\" or \"no\", depending on the implicature. For example, if you pass the model an example wrapped in a template like \"Esther asked 'Are you coming to the party this Friday' and Juan responded 'There's a global pandemic', which means\" the correct completion would be \"no\". Success in this task can be determined by the ability to generate the correct answer or by the ability to give the right token a higher likelihood than the wrong token, e.g. p(\"no\") > p(\"yes\").\n- : The dataset can be used to evaluate a models ability to fill the correct token, i.e. \"yes\" or \"no\", depending on the implicature. For example, if you pass the model an example wrapped in a template like \"Esther asked 'Are you coming to the party this Friday' and Juan responded 'There's a global pandemic', which means [mask]\" the correct mask-fill would be \"no\". Success in this task can be determined by the ability to fill the correct answer or by the ability to give the right token a higher likelihood than the wrong token, e.g. p(\"no\") > p(\"yes\").### Languages\n\nEnglish## Dataset Structure### Data Instances\n\nFind below an example of a 1-shot example instance (1-shot because there's 1 prompt example).### Data Fields### Data Splits\n\nValidation: 118 instances that can be used for finetuning or few-shot learning \nTest: 600 instances that can be used for evaluating models.\n\nNB: the splits weren't originally part of the paper that presents this dataset. The same goes for the k-shot prompts. Added\nby @LauraRuis.## Dataset Creation### Curation Rationale\n\nPragmatic language understanding is a crucial aspect of human communication, and implicatures are the primary object of study in this field.\nWe want computational models of language to understand all the speakers implications.### Source Data"
] |
7f5939deef9875465d3ff70ab0102ef957f4f352 | # Dataset Card for MFRC
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Reddit posts annotated for moral foundations
### Supported Tasks and Leaderboards
### Languages
English
## Dataset Structure
### Data Instances
### Data Fields
- text
- subreddit
- bucket
- annotator
- annotation
- confidence
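
A minimal sketch of loading the corpus and inspecting these fields (the `train` split name is an assumption; check the dataset's actual splits):

```python
from datasets import load_dataset

# Load the corpus and inspect one annotated Reddit post.
mfrc = load_dataset("USC-MOLA-Lab/MFRC")
example = mfrc["train"][0]  # split name assumed; list splits with mfrc.keys()
for field in ("text", "subreddit", "bucket", "annotator", "annotation", "confidence"):
    print(field, "->", example[field])
```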
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
cc-by-4.0
### Citation Information
```bibtex
@misc{trager2022moral,
title={The Moral Foundations Reddit Corpus},
author={Jackson Trager and Alireza S. Ziabari and Aida Mostafazadeh Davani and Preni Golazazian and Farzan Karimi-Malekabadi and Ali Omrani and Zhihe Li and Brendan Kennedy and Nils Karl Reimer and Melissa Reyes and Kelsey Cheng and Mellow Wei and Christina Merrifield and Arta Khosravi and Evans Alvarez and Morteza Dehghani},
year={2022},
eprint={2208.05545},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
| USC-MOLA-Lab/MFRC | [
"arxiv:2208.05545",
"region:us"
] | 2022-08-10T14:11:55+00:00 | {} | 2022-08-25T23:36:03+00:00 | [
"2208.05545"
] | [] | TAGS
#arxiv-2208.05545 #region-us
| # Dataset Card for MFRC
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
Reddit posts annotated for moral foundations
### Supported Tasks and Leaderboards
### Languages
English
## Dataset Structure
### Data Instances
### Data Fields
- text
- subreddit
- bucket
- annotator
- annotation
- confidence
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
cc-by-4.0
### Contributions
| [
"# Dataset Card for MFRC",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary\n\nReddit posts annotated for moral foundations",
"### Supported Tasks and Leaderboards",
"### Languages\n\nEnglish",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n- text\n- subreddit\n- bucket \n- annotator\n- annotation\n- confidence",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\ncc-by-4.0",
"### Contributions"
] | [
"TAGS\n#arxiv-2208.05545 #region-us \n",
"# Dataset Card for MFRC",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary\n\nReddit posts annotated for moral foundations",
"### Supported Tasks and Leaderboards",
"### Languages\n\nEnglish",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n- text\n- subreddit\n- bucket \n- annotator\n- annotation\n- confidence",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\ncc-by-4.0",
"### Contributions"
] | [
14,
7,
125,
24,
15,
10,
5,
6,
6,
23,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
12,
5
] | [
"passage: TAGS\n#arxiv-2208.05545 #region-us \n# Dataset Card for MFRC## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:### Dataset Summary\n\nReddit posts annotated for moral foundations### Supported Tasks and Leaderboards### Languages\n\nEnglish## Dataset Structure### Data Instances### Data Fields\n\n- text\n- subreddit\n- bucket \n- annotator\n- annotation\n- confidence### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information\n\ncc-by-4.0### Contributions"
] |
2328dd311177458120231b103ac782bae1844c0a | Subset of CORD-19 for rapid prototyping of ideas in vector encodings and Weaviate. | CShorten/CORD-19-prototype | [
"region:us"
] | 2022-08-10T14:21:31+00:00 | {} | 2022-08-11T19:49:11+00:00 | [] | [] | TAGS
#region-us
| Subset of CORD-19 for rapid prototyping of ideas in vector encodings and Weaviate. | [] | [
"TAGS\n#region-us \n"
] | [
6
] | [
"passage: TAGS\n#region-us \n"
] |
ffcbba9a8234249d2f89c0e828415cbc81d52428 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: datien228/distilbart-cnn-12-6-ftn-multi_news
* Dataset: multi_news
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ccdv](https://huggingface.co/ccdv) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-multi_news-416d7689-12805701 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-10T17:42:10+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["multi_news"], "eval_info": {"task": "summarization", "model": "datien228/distilbart-cnn-12-6-ftn-multi_news", "metrics": [], "dataset_name": "multi_news", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}} | 2022-08-10T18:11:52+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: datien228/distilbart-cnn-12-6-ftn-multi_news
* Dataset: multi_news
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @ccdv for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: datien228/distilbart-cnn-12-6-ftn-multi_news\n* Dataset: multi_news\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @ccdv for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: datien228/distilbart-cnn-12-6-ftn-multi_news\n* Dataset: multi_news\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @ccdv for evaluating this model."
] | [
13,
95,
15
] | [
"passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: datien228/distilbart-cnn-12-6-ftn-multi_news\n* Dataset: multi_news\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @ccdv for evaluating this model."
] |
306b7c093b3d71163e9e7d0a6f4bd572fefca7ae | 'Conversational bot' | Meowren/CapBot | [
"region:us"
] | 2022-08-10T17:51:53+00:00 | {} | 2022-08-10T17:58:02+00:00 | [] | [] | TAGS
#region-us
| 'Conversational bot' | [] | [
"TAGS\n#region-us \n"
] | [
6
] | [
"passage: TAGS\n#region-us \n"
] |
bc3f171abbbb710747eae427baa77802fd057887 |
# CVSS: A Massively Multilingual Speech-to-Speech Translation Corpus
*CVSS* is a massively multilingual-to-English speech-to-speech translation corpus, covering sentence-level parallel speech-to-speech translation pairs from 21 languages into English. CVSS is derived from the [Common Voice](https://commonvoice.mozilla.org/) speech corpus and the [CoVoST 2](https://github.com/facebookresearch/covost) speech-to-text translation corpus. The translation speech in CVSS is synthesized with two state-of-the-art TTS models trained on the [LibriTTS](http://www.openslr.org/60/) corpus.
CVSS includes two versions of spoken translation for all the 21 x-en language pairs from CoVoST 2, with each version providing unique values:
- *CVSS-C*: All the translation speeches are in a single canonical speaker's voice. Despite being synthetic, these speeches are of very high naturalness and cleanness, as well as having a consistent speaking style. These properties ease the modeling of the target speech and enable models to produce high quality translation speech suitable for user-facing applications.
- *CVSS-T*: The translation speeches are in voices transferred from the corresponding source speeches. Each translation pair has similar voices on the two sides despite being in different languages, making this dataset suitable for building models that preserve speakers' voices when translating speech into different languages.
Together with the source speeches originating from Common Voice, they make two multilingual speech-to-speech translation datasets, each with about 1,900 hours of speech.
In addition to translation speech, CVSS also provides normalized translation text matching the pronunciation in the translation speech (e.g. on numbers, currencies, acronyms, etc.), which can be used for both model training as well as standardizing evaluation.
Please check out [our paper](https://arxiv.org/abs/2201.03713) for the detailed description of this corpus, as well as the baseline models we trained on both datasets.
# Load the data
The following example loads the translation speech (i.e. target speech) and the normalized translation text (i.e. target text) released in CVSS corpus. You'll need to load the source speech and optionally the source text from [Common Voice v4.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_4_0) separately, and join them by the file names.
```py
from datasets import load_dataset
# Load only ar-en and ja-en language pairs. Omitting the `languages` argument
# would load all the language pairs.
cvss_c = load_dataset('google/cvss', 'cvss_c', languages=['ar', 'ja'])
# Print the structure of the dataset.
print(cvss_c)
```
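
The snippet above loads only the CVSS side; one way to perform the file-name join with Common Voice is sketched below. This is a minimal sketch under assumptions: the `path` and `file` column names and the `train` split keys are illustrative guesses, so inspect both datasets' actual schemas before relying on them.

```py
import os

from datasets import load_dataset

# Target (translation) speech and normalized text from CVSS-C, Arabic-English only.
cvss_c = load_dataset('google/cvss', 'cvss_c', languages=['ar'])

# Source speech and text from Common Voice v4.0 (a gated dataset; you must
# accept its terms on the Hugging Face Hub first).
common_voice = load_dataset('mozilla-foundation/common_voice_4_0', 'ar')

# Index the source clips by file name. NOTE: 'path' and 'file' are assumed
# column names for illustration; check the actual columns of each dataset.
source_by_name = {os.path.basename(ex['path']): ex for ex in common_voice['train']}

# Pair each CVSS target example with its Common Voice source example.
pairs = [(source_by_name[ex['file']], ex)
         for ex in cvss_c['train']
         if ex['file'] in source_by_name]
print(f'{len(pairs)} paired examples')
```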
# License
CVSS is released under the very permissive [Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/) license.
## Citation
Please cite this paper when referencing the CVSS corpus:
```
@inproceedings{jia2022cvss,
title={{CVSS} Corpus and Massively Multilingual Speech-to-Speech Translation},
author={Jia, Ye and Tadmor Ramanovich, Michelle and Wang, Quan and Zen, Heiga},
booktitle={Proceedings of Language Resources and Evaluation Conference (LREC)},
pages={6691--6703},
year={2022}
}
```
| google/cvss | [
"language:en",
"language:ar",
"language:ca",
"language:cy",
"language:de",
"language:es",
"language:et",
"language:fa",
"language:fr",
"language:id",
"language:it",
"language:ja",
"language:lv",
"language:mn",
"language:nl",
"language:pt",
"language:ru",
"language:sl",
"language:sv",
"language:ta",
"language:tr",
"language:zh",
"license:cc-by-4.0",
"arxiv:2201.03713",
"region:us"
] | 2022-08-10T23:54:54+00:00 | {"language": ["en", "ar", "ca", "cy", "de", "es", "et", "fa", "fr", "id", "it", "ja", "lv", "mn", "nl", "pt", "ru", "sl", "sv", "ta", "tr", "zh"], "license": "cc-by-4.0"} | 2024-02-10T04:34:53+00:00 | [
"2201.03713"
] | [
"en",
"ar",
"ca",
"cy",
"de",
"es",
"et",
"fa",
"fr",
"id",
"it",
"ja",
"lv",
"mn",
"nl",
"pt",
"ru",
"sl",
"sv",
"ta",
"tr",
"zh"
] | TAGS
#language-English #language-Arabic #language-Catalan #language-Welsh #language-German #language-Spanish #language-Estonian #language-Persian #language-French #language-Indonesian #language-Italian #language-Japanese #language-Latvian #language-Mongolian #language-Dutch #language-Portuguese #language-Russian #language-Slovenian #language-Swedish #language-Tamil #language-Turkish #language-Chinese #license-cc-by-4.0 #arxiv-2201.03713 #region-us
|
# CVSS: A Massively Multilingual Speech-to-Speech Translation Corpus
*CVSS* is a massively multilingual-to-English speech-to-speech translation corpus, covering sentence-level parallel speech-to-speech translation pairs from 21 languages into English. CVSS is derived from the Common Voice speech corpus and the CoVoST 2 speech-to-text translation corpus. The translation speech in CVSS is synthesized with two state-of-the-art TTS models trained on the LibriTTS corpus.
CVSS includes two versions of spoken translation for all the 21 x-en language pairs from CoVoST 2, with each version providing unique values:
- *CVSS-C*: All the translation speeches are in a single canonical speaker's voice. Despite being synthetic, these speeches are of very high naturalness and cleanness, as well as having a consistent speaking style. These properties ease the modeling of the target speech and enable models to produce high quality translation speech suitable for user-facing applications.
- *CVSS-T*: The translation speeches are in voices transferred from the corresponding source speeches. Each translation pair has similar voices on the two sides despite being in different languages, making this dataset suitable for building models that preserve speakers' voices when translating speech into different languages.
Together with the source speeches originating from Common Voice, they make two multilingual speech-to-speech translation datasets, each with about 1,900 hours of speech.
In addition to translation speech, CVSS also provides normalized translation text matching the pronunciation in the translation speech (e.g. on numbers, currencies, acronyms, etc.), which can be used for both model training as well as standardizing evaluation.
Please check out our paper for the detailed description of this corpus, as well as the baseline models we trained on both datasets.
# Load the data
The following example loads the translation speech (i.e. target speech) and the normalized translation text (i.e. target text) released in CVSS corpus. You'll need to load the source speech and optionally the source text from Common Voice v4.0 separately, and join them by the file names.
# License
CVSS is released under the very permissive Creative Commons Attribution 4.0 International (CC BY 4.0) license.
Please cite this paper when referencing the CVSS corpus:
| [
"# CVSS: A Massively Multilingual Speech-to-Speech Translation Corpus\n\n\n*CVSS* is a massively multilingual-to-English speech-to-speech translation corpus, covering sentence-level parallel speech-to-speech translation pairs from 21 languages into English. CVSS is derived from the Common Voice speech corpus and the CoVoST 2 speech-to-text translation corpus. The translation speech in CVSS is synthesized with two state-of-the-art TTS models trained on the LibriTTS corpus.\n\nCVSS includes two versions of spoken translation for all the 21 x-en language pairs from CoVoST 2, with each version providing unique values:\n\n - *CVSS-C*: All the translation speeches are in a single canonical speaker's voice. Despite being synthetic, these speeches are of very high naturalness and cleanness, as well as having a consistent speaking style. These properties ease the modeling of the target speech and enable models to produce high quality translation speech suitable for user-facing applications.\n\n - *CVSS-T*: The translation speeches are in voices transferred from the corresponding source speeches. Each translation pair has similar voices on the two sides despite being in different languages, making this dataset suitable for building models that preserve speakers' voices when translating speech into different languages.\n\nTogether with the source speeches originated from Common Voice, they make two multilingual speech-to-speech translation datasets each with about 1,900 hours of speech.\n\nIn addition to translation speech, CVSS also provides normalized translation text matching the pronunciation in the translation speech (e.g. on numbers, currencies, acronyms, etc.), which can be used for both model training as well as standardizing evaluation.\n\nPlease check out our paper for the detailed description of this corpus, as well as the baseline models we trained on both datasets.",
"# Load the data\n\n\nThe following example loads the translation speech (i.e. target speech) and the normalized translation text (i.e. target text) released in CVSS corpus. You'll need to load the source speech and optionally the source text from Common Voice v4.0 separately, and join them by the file names.",
"# License\n\nCVSS is released under the very permissive Creative Commons Attribution 4.0 International (CC BY 4.0) license.\n\n\nPlease cite this paper when referencing the CVSS corpus:"
] | [
"TAGS\n#language-English #language-Arabic #language-Catalan #language-Welsh #language-German #language-Spanish #language-Estonian #language-Persian #language-French #language-Indonesian #language-Italian #language-Japanese #language-Latvian #language-Mongolian #language-Dutch #language-Portuguese #language-Russian #language-Slovenian #language-Swedish #language-Tamil #language-Turkish #language-Chinese #license-cc-by-4.0 #arxiv-2201.03713 #region-us \n",
"# CVSS: A Massively Multilingual Speech-to-Speech Translation Corpus\n\n\n*CVSS* is a massively multilingual-to-English speech-to-speech translation corpus, covering sentence-level parallel speech-to-speech translation pairs from 21 languages into English. CVSS is derived from the Common Voice speech corpus and the CoVoST 2 speech-to-text translation corpus. The translation speech in CVSS is synthesized with two state-of-the-art TTS models trained on the LibriTTS corpus.\n\nCVSS includes two versions of spoken translation for all the 21 x-en language pairs from CoVoST 2, with each version providing unique values:\n\n - *CVSS-C*: All the translation speeches are in a single canonical speaker's voice. Despite being synthetic, these speeches are of very high naturalness and cleanness, as well as having a consistent speaking style. These properties ease the modeling of the target speech and enable models to produce high quality translation speech suitable for user-facing applications.\n\n - *CVSS-T*: The translation speeches are in voices transferred from the corresponding source speeches. Each translation pair has similar voices on the two sides despite being in different languages, making this dataset suitable for building models that preserve speakers' voices when translating speech into different languages.\n\nTogether with the source speeches originated from Common Voice, they make two multilingual speech-to-speech translation datasets each with about 1,900 hours of speech.\n\nIn addition to translation speech, CVSS also provides normalized translation text matching the pronunciation in the translation speech (e.g. on numbers, currencies, acronyms, etc.), which can be used for both model training as well as standardizing evaluation.\n\nPlease check out our paper for the detailed description of this corpus, as well as the baseline models we trained on both datasets.",
"# Load the data\n\n\nThe following example loads the translation speech (i.e. target speech) and the normalized translation text (i.e. target text) released in CVSS corpus. You'll need to load the source speech and optionally the source text from Common Voice v4.0 separately, and join them by the file names.",
"# License\n\nCVSS is released under the very permissive Creative Commons Attribution 4.0 International (CC BY 4.0) license.\n\n\nPlease cite this paper when referencing the CVSS corpus:"
] | [
140,
436,
72,
35
] | [
"passage: TAGS\n#language-English #language-Arabic #language-Catalan #language-Welsh #language-German #language-Spanish #language-Estonian #language-Persian #language-French #language-Indonesian #language-Italian #language-Japanese #language-Latvian #language-Mongolian #language-Dutch #language-Portuguese #language-Russian #language-Slovenian #language-Swedish #language-Tamil #language-Turkish #language-Chinese #license-cc-by-4.0 #arxiv-2201.03713 #region-us \n"
] |