| Column | Type | Min length | Max length |
|---|---|---|---|
| sha | string | 40 | 40 |
| text | string | 1 | 13.4M |
| id | string | 2 | 117 |
| tags | sequence | 1 | 7.91k |
| created_at | string | 25 | 25 |
| metadata | string | 2 | 875k |
| last_modified | string | 25 | 25 |
| arxiv | sequence | 0 | 25 |
| languages | sequence | 0 | 7.91k |
| tags_str | string | 17 | 159k |
| text_str | string | 1 | 447k |
| text_lists | sequence | 0 | 352 |
| processed_texts | sequence | 1 | 353 |
ee253755850abc7b010d78a9fe5ff3df16380d15 |
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | vovadevico/fashion-layers | [
"task_categories:image-classification",
"size_categories:n<1K",
"license:unknown",
"region:us"
] | 2024-01-31T22:56:59+00:00 | {"license": "unknown", "size_categories": ["n<1K"], "task_categories": ["image-classification"]} | 2024-02-01T00:36:56+00:00 | [] | [] | TAGS
#task_categories-image-classification #size_categories-n<1K #license-unknown #region-us
|
# Dataset Card for Dataset Name
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#task_categories-image-classification #size_categories-n<1K #license-unknown #region-us \n",
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] |
3e1420f4997c46b871f0436635750f9fc6623459 |
# ColorSwap: A Color and Word Order Dataset for Multimodal Evaluation
## Dataset Description
ColorSwap is a dataset designed to assess and improve the proficiency of multimodal models in matching objects with their colors. The dataset comprises 2,000 unique image-caption pairs, grouped into 1,000 examples. Each example includes a caption-image pair, along with a "color-swapped" pair. Crucially, the two captions in an example have the same words, but the color words have been rearranged to modify different objects. The dataset was created through a novel blend of automated caption and image generation with humans in the loop.
Paper: Coming soon!
## Usage
You can download the dataset directly from the Hugging Face API with the following code:
```python
from datasets import load_dataset
dataset = load_dataset("stanfordnlp/colorswap", use_auth_token=True)
```
Please make sure to install the `datasets` library and use the `use_auth_token` parameter to authenticate with the Hugging Face API.
An example of the dataset is as follows:
```python
[
{
'id': 0,
'image_1': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=1024x1024 at 0x14D908B20>,
'image_2': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=1024x1024 at 0x14D9DCE20>,
'caption_1': 'someone holding a yellow umbrella wearing a white dress',
'caption_2': 'someone holding a white umbrella wearing a yellow dress',
'image_source': 'midjourney',
'caption_source': 'human'
}
...
]
```
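For example, a minimal sketch that loads the train split and inspects the first pair (field names as shown in the example above):

```python
from datasets import load_dataset

# Load the dataset and take the first color-swapped example.
dataset = load_dataset("stanfordnlp/colorswap", use_auth_token=True)
example = dataset["train"][0]

print(example["caption_1"])  # e.g. 'someone holding a yellow umbrella wearing a white dress'
print(example["caption_2"])  # the color-swapped counterpart
example["image_1"].save("image_1.png")  # images are PIL objects and can be saved directly
```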
## Evaluations
[This Google Colab](https://colab.research.google.com/drive/1EWPsSklfq49WiX2nUyOTmKZftU0AC4YL?usp=sharing) showcases our ITM model evaluations.
Please refer to our Github repository for the VLM evaluations: [ColorSwap](https://github.com/Top34051/colorswap).
## Citation
If you find our work useful, please cite the following paper:
```
@article{burapacheep2024colorswap,
author = {Jirayu Burapacheep and Ishan Gaur and Agam Bhatia and Tristan Thrush},
title = {ColorSwap: A Color and Word Order Dataset for Multimodal Evaluation},
journal = {arXiv},
year = {2024},
}
```
| stanfordnlp/colorswap | [
"license:mit",
"region:us"
] | 2024-02-01T00:00:16+00:00 | {"license": ["mit"], "dataset_info": {"features": [{"name": "id", "dtype": "int32"}, {"name": "image_1", "dtype": "image"}, {"name": "image_2", "dtype": "image"}, {"name": "caption_1", "dtype": "string"}, {"name": "caption_2", "dtype": "string"}, {"name": "image_source", "dtype": "string"}, {"name": "caption_source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 300541, "num_examples": 700}, {"name": "test", "num_bytes": 128623, "num_examples": 300}], "download_size": 2762991931, "dataset_size": 429164}} | 2024-02-06T22:23:20+00:00 | [] | [] | TAGS
#license-mit #region-us
|
# ColorSwap: A Color and Word Order Dataset for Multimodal Evaluation
## Dataset Description
ColorSwap is a dataset designed to assess and improve the proficiency of multimodal models in matching objects with their colors. The dataset comprises 2,000 unique image-caption pairs, grouped into 1,000 examples. Each example includes a caption-image pair, along with a "color-swapped" pair. Crucially, the two captions in an example have the same words, but the color words have been rearranged to modify different objects. The dataset was created through a novel blend of automated caption and image generation with humans in the loop.
Paper: Coming soon!
## Usage
You can download the dataset directly from the Hugging Face API with the following code:
Please make sure to install the 'datasets' library and use the 'use_auth_token' parameter to authenticate with the Hugging Face API.
An example of the dataset is as follows:
## Evaluations
This Google Colab showcases our ITM model evaluations.
Please refer to our Github repository for the VLM evaluations: ColorSwap.
If you find our work useful, please cite the following paper:
| [
"# ColorSwap: A Color and Word Order Dataset for Multimodal Evaluation",
"## Dataset Description\n\nColorSwap is a dataset designed to assess and improve the proficiency of multimodal models in matching objects with their colors. The dataset is comprised of 2,000 unique image-caption pairs, grouped into 1,000 examples. Each example includes a caption-image pair, along with a \"color-swapped\" pair. Crucially, the two captions in an example have the same words, but the color words have been rearranged to modify different objects. The dataset was created through a novel blend of automated caption and image generation with humans in the loop. \n\nPaper: Coming soon!",
"## Usage\n\nYou can download the dataset directly from the Hugging Face API with the following code:\n\n\n\nPlease make sure to install the 'datasets' library and use the 'use_auth_token' parameter to authenticate with the Hugging Face API.\n\nAn example of the dataset is as follows:",
"## Evaluations\n\nThis Google Colab showcases our ITM model evaluations.\n\nPlease refer to our Github repository for the VLM evaluations: ColorSwap.\n\nIf you find our work useful, please cite the following paper:"
] | [
"TAGS\n#license-mit #region-us \n",
"# ColorSwap: A Color and Word Order Dataset for Multimodal Evaluation",
"## Dataset Description\n\nColorSwap is a dataset designed to assess and improve the proficiency of multimodal models in matching objects with their colors. The dataset is comprised of 2,000 unique image-caption pairs, grouped into 1,000 examples. Each example includes a caption-image pair, along with a \"color-swapped\" pair. Crucially, the two captions in an example have the same words, but the color words have been rearranged to modify different objects. The dataset was created through a novel blend of automated caption and image generation with humans in the loop. \n\nPaper: Coming soon!",
"## Usage\n\nYou can download the dataset directly from the Hugging Face API with the following code:\n\n\n\nPlease make sure to install the 'datasets' library and use the 'use_auth_token' parameter to authenticate with the Hugging Face API.\n\nAn example of the dataset is as follows:",
"## Evaluations\n\nThis Google Colab showcases our ITM model evaluations.\n\nPlease refer to our Github repository for the VLM evaluations: ColorSwap.\n\nIf you find our work useful, please cite the following paper:"
] |
7eaee3c92eb70701f798db2ac4447c03c3133ca2 |
A medical question-and-answer dataset of 450 responses covering common diseases, including disease definitions, symptoms, treatments, prevention methods, references, and expected duration.
| Chinyemba/medical-QA | [
"license:mit",
"region:us"
] | 2024-02-01T00:24:10+00:00 | {"license": "mit"} | 2024-02-01T00:34:36+00:00 | [] | [] | TAGS
#license-mit #region-us
|
A medical question-and-answer dataset of 450 responses covering common diseases, including disease definitions, symptoms, treatments, prevention methods, references, and expected duration.
| [] | [
"TAGS\n#license-mit #region-us \n"
] |
4d7b6663e9020c05593b0d30fd61e0b959ee2a40 | A combination of several Spanish-language datasets from Hugging Face, formatted in the ShareGPT style. | Arconte/spanish-sharegpt-60k | [
"region:us"
] | 2024-02-01T00:36:02+00:00 | {} | 2024-02-01T01:53:40+00:00 | [] | [] | TAGS
#region-us
| A combination of several Spanish-language datasets from Hugging Face, formatted in the ShareGPT style. | [] | [
"TAGS\n#region-us \n"
] |
2be4046b853842b95c43c7979796bce0a4941891 | # Dataset Card for "coco-2014-instance"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | aha-org/coco-2014-instance | [
"task_categories:object-detection",
"size_categories:100K<n<1M",
"license:cc-by-4.0",
"coco",
"region:us"
] | 2024-02-01T01:10:01+00:00 | {"license": "cc-by-4.0", "size_categories": ["100K<n<1M"], "task_categories": ["object-detection"], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "annotations", "dtype": "image"}, {"name": "objects", "struct": [{"name": "bbox", "sequence": {"sequence": "float32"}}, {"name": "categories", "sequence": {"class_label": {"names": {"0": "person", "1": "bicycle", "2": "car", "3": "motorcycle", "4": "airplane", "5": "bus", "6": "train", "7": "truck", "8": "boat", "9": "traffic light", "10": "fire hydrant", "11": "stop sign", "12": "parking meter", "13": "bench", "14": "bird", "15": "cat", "16": "dog", "17": "horse", "18": "sheep", "19": "cow", "20": "elephant", "21": "bear", "22": "zebra", "23": "giraffe", "24": "backpack", "25": "umbrella", "26": "handbag", "27": "tie", "28": "suitcase", "29": "frisbee", "30": "skis", "31": "snowboard", "32": "sports ball", "33": "kite", "34": "baseball bat", "35": "baseball glove", "36": "skateboard", "37": "surfboard", "38": "tennis racket", "39": "bottle", "40": "wine glass", "41": "cup", "42": "fork", "43": "knife", "44": "spoon", "45": "bowl", "46": "banana", "47": "apple", "48": "sandwich", "49": "orange", "50": "broccoli", "51": "carrot", "52": "hot dog", "53": "pizza", "54": "donut", "55": "cake", "56": "chair", "57": "couch", "58": "potted plant", "59": "bed", "60": "dining table", "61": "toilet", "62": "tv", "63": "laptop", "64": "mouse", "65": "remote", "66": "keyboard", "67": "cell phone", "68": "microwave", "69": "oven", "70": "toaster", "71": "sink", "72": "refrigerator", "73": "book", "74": "clock", "75": "vase", "76": "scissors", "77": "teddy bear", "78": "hair drier", "79": "toothbrush"}}}}, {"name": "area", "sequence": "float32"}, {"name": "iscrowd", "sequence": "bool"}]}, {"name": "height", "dtype": "int64"}, {"name": "width", "dtype": "int64"}, {"name": "date_captured", "dtype": "string"}, {"name": "license", "dtype": {"class_label": {"names": {"0": "Attribution-NonCommercial-ShareAlike License", "1": "Attribution-NonCommercial License", "2": "Attribution-NonCommercial-NoDerivs License", "3": "Attribution License", "4": "Attribution-ShareAlike License", "5": "Attribution-NoDerivs License", "6": "No known", "7": "United States Government Work"}}}}, {"name": "coco_url", "dtype": "string"}, {"name": "flickr_url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 13784509594.309, "num_examples": 82081}, {"name": "validation", "num_bytes": 6877258108.769, "num_examples": 40137}, {"name": "test", "num_bytes": 6600156203.075, "num_examples": 40775}], "download_size": 15299492466, "dataset_size": 27261923906.153}, "tags": ["coco"]} | 2024-02-01T02:37:56+00:00 | [] | [] | TAGS
#task_categories-object-detection #size_categories-100K<n<1M #license-cc-by-4.0 #coco #region-us
| # Dataset Card for "coco-2014-instance"
More Information needed | [
"# Dataset Card for \"coco-2014-instance\"\n\nMore Information needed"
] | [
"TAGS\n#task_categories-object-detection #size_categories-100K<n<1M #license-cc-by-4.0 #coco #region-us \n",
"# Dataset Card for \"coco-2014-instance\"\n\nMore Information needed"
] |
60ea2575779f7ad31fa25f2c66f856263255cc2a |
Knowledge Pile is a knowledge-focused dataset built by leveraging [Query of CC](https://arxiv.org/abs/2401.14624).
Our data is currently undergoing organization and verification and will be released soon. | ngc7293/Knowledge_Pile | [
"language:en",
"license:apache-2.0",
"knowledge",
"cc",
"Retrieval",
"Reasoning",
"arxiv:2401.14624",
"region:us"
] | 2024-02-01T01:25:20+00:00 | {"language": ["en"], "license": "apache-2.0", "tags": ["knowledge", "cc", "Retrieval", "Reasoning"]} | 2024-02-16T10:58:05+00:00 | [
"2401.14624"
] | [
"en"
] | TAGS
#language-English #license-apache-2.0 #knowledge #cc #Retrieval #Reasoning #arxiv-2401.14624 #region-us
|
Knowledge Pile is a knowledge-focused dataset built by leveraging Query of CC.
Our data is currently undergoing organization and verification and will be released soon. | [] | [
"TAGS\n#language-English #license-apache-2.0 #knowledge #cc #Retrieval #Reasoning #arxiv-2401.14624 #region-us \n"
] |
a7796c5e984bd12247a4be9bf75d88b69932e90f |
# Model Library DB
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://huggingface.co/spaces/nicholasKluge/Model-Library
- **Repository:** https://github.com/Nkluge-correa/ModelLibrary
- **Point of Contact:** [AIRES at PUCRS]([email protected])
### Dataset Summary
The Model Library is a project that maps the risks associated with modern machine learning systems. Here, we assess some of the most recent and capable AI systems ever created. This is the database for the [Model Library](https://huggingface.co/spaces/nicholasKluge/Model-Library).
### Supported Tasks and Leaderboards
This dataset serves as a catalog of machine learning models, all displayed in the [Model Library](https://huggingface.co/spaces/nicholasKluge/Model-Library).
### Languages
English.
## Dataset Structure
### Data Instances
Features available are: `model_name_string, model_name_url, model_size_string, dataset, data_type, research_field, risks_and_limitations, risk_types, publication_date, organization_and_url, institution_type, country, license, paper_name_url, model_description, organization_info`.
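A minimal sketch of loading the catalog with the `datasets` library (the `main` split is the one described under Data Splits below):

```python
from datasets import load_dataset

# Load the "main" split of the Model Library catalog.
library = load_dataset("nicholasKluge/model-library", split="main")

print(library.column_names)             # matches the feature list above
print(library[0]["model_name_string"])  # name of the first cataloged model
```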
### Data Fields
Read [Data Instances](#data-instances).
### Data Splits
The "main" split is the current version displayed in the [Model Library](https://huggingface.co/spaces/nicholasKluge/Model-Library).
## Dataset Creation
### Curation Rationale
This dataset is maintained as part of a research project to catalog risks related to ML models.
### Source Data
#### Initial Data Collection and Normalization
All data was collected manually.
#### Who are the source language producers?
More information can be found [here](https://github.com/Nkluge-correa/Model-Library/blob/main/INSTRUCTIONS.md).
### Annotations
#### Annotation process
More information can be found [here](https://github.com/Nkluge-correa/Model-Library/blob/main/INSTRUCTIONS.md).
#### Who are the annotators?
Members of the [AI Robotics Ethics Society](https://www.theaires.org/) (AIRES).
### Personal and Sensitive Information
No personal or sensitive information is part of this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
No considerations.
### Discussion of Biases
No considerations.
### Other Known Limitations
No considerations.
## Additional Information
### Dataset Curators
Members of the [AI Robotics Ethics Society](https://www.theaires.org/) (AIRES).
### Licensing Information
This dataset is licensed under the [Apache License, version 2.0](LICENSE).
### Citation Information
```latex
@misc{correa24library,
author = {Nicholas Kluge Corr{\^e}a and Faizah Naqvi and Robayet Rossain},
title = {Model Library},
year = {2024},
howpublished = {\url{https://github.com/Nkluge-correa/Model-Library}}
}
```
### Contributions
If you would like to add a model, read our documentation and submit a PR on [GitHub](https://github.com/Nkluge-correa/Model-Library)!
| nicholasKluge/model-library | [
"language:en",
"license:apache-2.0",
"region:us"
] | 2024-02-01T01:38:51+00:00 | {"language": ["en"], "license": "apache-2.0", "pretty_name": "Model Library", "dataset_info": {"features": [{"name": "model_name_string", "dtype": "string"}, {"name": "model_name_url", "dtype": "string"}, {"name": "model_size_string", "dtype": "string"}, {"name": "dataset", "dtype": "string"}, {"name": "data_type", "dtype": "string"}, {"name": "research_field", "dtype": "string"}, {"name": "risks_and_limitations", "dtype": "string"}, {"name": "risk_types", "dtype": "string"}, {"name": "publication_date", "dtype": "string"}, {"name": "organization_and_url", "dtype": "string"}, {"name": "institution_type", "dtype": "float64"}, {"name": "country", "dtype": "string"}, {"name": "license", "dtype": "string"}, {"name": "paper_name_url", "dtype": "string"}, {"name": "model_description", "dtype": "string"}, {"name": "organization_info", "dtype": "string"}], "splits": [{"name": "main", "num_bytes": 82323, "num_examples": 35}], "download_size": 52749, "dataset_size": 82323}, "configs": [{"config_name": "default", "data_files": [{"split": "main", "path": "data/main-*"}]}]} | 2024-02-15T18:15:11+00:00 | [] | [
"en"
] | TAGS
#language-English #license-apache-2.0 #region-us
|
# Model Library DB
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: URL
- Repository: URL
- Point of Contact: AIRES at PUCRS
### Dataset Summary
The Model Library is a project that maps the risks associated with modern machine learning systems. Here, we assess some of the most recent and capable AI systems ever created. This is the database for the Model Library.
### Supported Tasks and Leaderboards
This dataset serves as a catalog of machine learning models, all displayed in the Model Library.
### Languages
English.
## Dataset Structure
### Data Instances
Features available are: 'model_name_string, model_name_url, model_size_string, dataset, data_type, research_field, risks_and_limitations, risk_types, publication_date, organization_and_url, institution_type, country, license, paper_name_url, model_description, organization_info'.
### Data Fields
Read Data Instances.
### Data Splits
The "main" split is the current version displayed in the Model Library.
## Dataset Creation
### Curation Rationale
This dataset is maintained as part of a research project to catalog risks related to ML models.
### Source Data
#### Initial Data Collection and Normalization
All data was collected manually.
#### Who are the source language producers?
More information can be found here.
### Annotations
#### Annotation process
More information can be found here.
#### Who are the annotators?
Members of the AI Robotics Ethics Society (AIRES).
### Personal and Sensitive Information
No personal or sensitive information is part of this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
No considerations.
### Discussion of Biases
No considerations.
### Other Known Limitations
No considerations.
## Additional Information
### Dataset Curators
Members of the AI Robotics Ethics Society (AIRES).
### Licensing Information
This dataset is licensed under the Apache License, version 2.0.
### Contributions
If you would like to add a model, read our documentation and submit a PR on GitHub!
| [
"# Model Library DB",
"## Table of Contents\n\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Point of Contact: AIRES at PUCRS",
"### Dataset Summary\n\nThe Model Library is a project that maps the risks associated with modern machine learning systems. Here, we assess some of the most recent and capable AI systems ever created. This is the database for the Model Library.",
"### Supported Tasks and Leaderboards\n\nThis dataset serves as a catalog of machine learning models, all displayed in the Model Library.",
"### Languages\n\nEnglish.",
"## Dataset Structure",
"### Data Instances\n\nFeatures available are: 'model_name_string, model_name_url, model_size_string, dataset, data_type, research_field, risks_and_limitations, risk_types,publication_date, organization_and_url, institution_type, country, license, paper_name_url, model_description, organization_info'.",
"### Data Fields\n\nRead Data Instances.",
"### Data Splits\n\n\"main\" slipt is the current version displayed in the Model Library.",
"## Dataset Creation",
"### Curation Rationale\n\nThis dataset is maintained as part of a research project to catalog risks related to ML models.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nAll data was collected manually.",
"#### Who are the source language producers?\n\nMore information can be found here.",
"### Annotations",
"#### Annotation process\n\nMore information can be found here.",
"#### Who are the annotators?\n\nMembers of the AI Robotics Ethics Society (AIRES).",
"### Personal and Sensitive Information\n\nNo personal or sensitive information is part of this dataset.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nNo considerations.",
"### Discussion of Biases\n\nNo considerations.",
"### Other Known Limitations\n\nNo considerations.",
"## Additional Information",
"### Dataset Curators\n\nMembers of the AI Robotics Ethics Society (AIRES).",
"### Licensing Information\n\nThis dataset is licensed under the Apache License, version 2.0.",
"### Contributions\n\nIf you would like to add a model, read our documentation and submit a PR on GitHub!"
] | [
"TAGS\n#language-English #license-apache-2.0 #region-us \n",
"# Model Library DB",
"## Table of Contents\n\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Point of Contact: AIRES at PUCRS",
"### Dataset Summary\n\nThe Model Library is a project that maps the risks associated with modern machine learning systems. Here, we assess some of the most recent and capable AI systems ever created. This is the database for the Model Library.",
"### Supported Tasks and Leaderboards\n\nThis dataset serves as a catalog of machine learning models, all displayed in the Model Library.",
"### Languages\n\nEnglish.",
"## Dataset Structure",
"### Data Instances\n\nFeatures available are: 'model_name_string, model_name_url, model_size_string, dataset, data_type, research_field, risks_and_limitations, risk_types,publication_date, organization_and_url, institution_type, country, license, paper_name_url, model_description, organization_info'.",
"### Data Fields\n\nRead Data Instances.",
"### Data Splits\n\n\"main\" slipt is the current version displayed in the Model Library.",
"## Dataset Creation",
"### Curation Rationale\n\nThis dataset is maintained as part of a research project to catalog risks related to ML models.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nAll data was collected manually.",
"#### Who are the source language producers?\n\nMore information can be found here.",
"### Annotations",
"#### Annotation process\n\nMore information can be found here.",
"#### Who are the annotators?\n\nMembers of the AI Robotics Ethics Society (AIRES).",
"### Personal and Sensitive Information\n\nNo personal or sensitive information is part of this dataset.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nNo considerations.",
"### Discussion of Biases\n\nNo considerations.",
"### Other Known Limitations\n\nNo considerations.",
"## Additional Information",
"### Dataset Curators\n\nMembers of the AI Robotics Ethics Society (AIRES).",
"### Licensing Information\n\nThis dataset is licensed under the Apache License, version 2.0.",
"### Contributions\n\nIf you would like to add a model, read our documentation and submit a PR on GitHub!"
] |
ae5ef6246ba8a4043255f9e24d3d2128e684ea95 | # Dataset Card for "cybersec_embedding_llama_chat_another"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Yemmy1000/cybersec_embedding_llama_chat_another | [
"region:us"
] | 2024-02-01T01:44:03+00:00 | {"dataset_info": {"features": [{"name": "INSTRUCTION", "dtype": "string"}, {"name": "RESPONSE", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5750270.64103804, "num_examples": 7697}], "download_size": 2742402, "dataset_size": 5750270.64103804}} | 2024-02-01T01:44:05+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "cybersec_embedding_llama_chat_another"
More Information needed | [
"# Dataset Card for \"cybersec_embedding_llama_chat_another\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"cybersec_embedding_llama_chat_another\"\n\nMore Information needed"
] |
17339ffe1a660fd83e3269552582c1e08e990cab | # VMMFC-30K Dataset
VMMFC-30K is a dataset available on Hugging Face that focuses on vision and multi-modal function calling samples. This dataset is specifically designed to train large language models (LLMs) for tool usage, API calls, and parallel tool usage.
## Datasets Included
-----------------
The VMMFC-30K dataset consists of the following datasets:
1. AgoraX
2. VMMFC-3OK
## Dataset Structure
-----------------
The dataset contains the following key features:
- `image_bytes`: This feature represents the image data in the form of bytes.
- `image_path`: This feature contains the path of the image file.
- `caption`: This feature includes the caption associated with the image.
- `fn_call`: This feature contains the function call information.
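A minimal sketch of reading these fields, assuming a `train` split and that `image_bytes` holds an encoded image (both are assumptions, since the card does not state them):

```python
import io

from datasets import load_dataset
from PIL import Image

# Assumed split name; adjust if the actual split differs.
ds = load_dataset("AgoraX/VMMFC-3OK", split="train")
row = ds[0]

image = Image.open(io.BytesIO(row["image_bytes"]))  # decode raw bytes into a PIL image
print(row["image_path"], row["caption"], row["fn_call"])
```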
## Files and Versions
------------------
- The dataset is available in Parquet format.
- Version: Auto-converted
## Dataset Viewer and API
----------------------
The dataset can be accessed and explored using the following methods:
- Dataset Viewer: [Go to dataset viewer](https://huggingface.co/datasets/AgoraX/VMMFC-3OK/)
- API: [API documentation](https://huggingface.co/datasets/AgoraX/VMMFC-3OK/)
## Community
---------
For any questions, discussions, or contributions related to the VMMFC-30K dataset, please visit the [Hugging Face community page](https://huggingface.co/datasets/AgoraX/link_to_community_page).
## Settings
--------
The dataset settings and configurations can be modified and adjusted based on specific requirements.
## Dataset Card
------------
Available dataset card information:
- [Dataset card](https://huggingface.co/datasets/AgoraX/VMMFC-3OK/) | AgoraX/VMMFC-3OK | [
"region:us"
] | 2024-02-01T02:46:39+00:00 | {} | 2024-02-01T18:54:15+00:00 | [] | [] | TAGS
#region-us
| # VMMFC-30K Dataset
VMMFC-30K is a dataset available on Hugging Face that focuses on vision and multi-modal function calling samples. This dataset is specifically designed to train large language models (LLMs) for tool usage, API calls, and parallel tool usage.
## Datasets Included
-----------------
The VMMFC-30K dataset consists of the following datasets:
1. AgoraX
2. VMMFC-3OK
## Dataset Structure
-----------------
The dataset contains the following key features:
- 'image_bytes': This feature represents the image data in the form of bytes.
- 'image_path': This feature contains the path of the image file.
- 'caption': This feature includes the caption associated with the image.
- 'fn_call': This feature contains the function call information.
## Files and Versions
------------------
- The dataset is available in Parquet format.
- Version: Auto-converted
## Dataset Viewer and API
----------------------
The dataset can be accessed and explored using the following methods:
- Dataset Viewer: Go to dataset viewer
- API: API documentation
## Community
---------
For any questions, discussions, or contributions related to the VMMFC-30K dataset, please visit the Hugging Face community page.
## Settings
--------
The dataset settings and configurations can be modified and adjusted based on specific requirements.
## Dataset Card
------------
Available dataset card information:
- Dataset card | [
"# VMMFC-30K Dataset\n\nVMMFC-30K is a dataset available on Hugging Face that focuses on vision and multi-modal function calling samples. This dataset is specifically designed to train Language Learning Models (LLMs) for tool usage, API calls, and parallel tool usage.",
"## Datasets Included\n-----------------\n\nThe VMMFC-30K dataset consists of the following datasets:\n\n1. AgoraX\n2. VMMFC-3OK",
"## Dataset Structure\n-----------------\n\nThe dataset contains the following key features:\n\n- 'image_bytes': This feature represents the image data in the form of bytes.\n- 'image_path': This feature contains the path of the image file.\n- 'caption': This feature includes the caption associated with the image.\n- 'fn_call': This feature contains the function call information.",
"## Files and Versions\n------------------\n\n- The dataset is available in Parquet format.\n- Version: Auto-converted",
"## Dataset Viewer and API\n----------------------\n\nThe dataset can be accessed and explored using the following methods:\n\n- Dataset Viewer: Go to dataset viewer\n- API: API documentation",
"## Community\n---------\n\nFor any questions, discussions, or contributions related to the VMMFC-30K dataset, please visit the Hugging Face community page.",
"## Settings\n--------\n\nThe dataset settings and configurations can be modified and adjusted based on specific requirements.",
"## Dataset Card\n------------\n\nAvailable dataset card information:\n\n- Dataset card"
] | [
"TAGS\n#region-us \n",
"# VMMFC-30K Dataset\n\nVMMFC-30K is a dataset available on Hugging Face that focuses on vision and multi-modal function calling samples. This dataset is specifically designed to train Language Learning Models (LLMs) for tool usage, API calls, and parallel tool usage.",
"## Datasets Included\n-----------------\n\nThe VMMFC-30K dataset consists of the following datasets:\n\n1. AgoraX\n2. VMMFC-3OK",
"## Dataset Structure\n-----------------\n\nThe dataset contains the following key features:\n\n- 'image_bytes': This feature represents the image data in the form of bytes.\n- 'image_path': This feature contains the path of the image file.\n- 'caption': This feature includes the caption associated with the image.\n- 'fn_call': This feature contains the function call information.",
"## Files and Versions\n------------------\n\n- The dataset is available in Parquet format.\n- Version: Auto-converted",
"## Dataset Viewer and API\n----------------------\n\nThe dataset can be accessed and explored using the following methods:\n\n- Dataset Viewer: Go to dataset viewer\n- API: API documentation",
"## Community\n---------\n\nFor any questions, discussions, or contributions related to the VMMFC-30K dataset, please visit the Hugging Face community page.",
"## Settings\n--------\n\nThe dataset settings and configurations can be modified and adjusted based on specific requirements.",
"## Dataset Card\n------------\n\nAvailable dataset card information:\n\n- Dataset card"
] |
bdb5afbfe7d0b5261a5fdf34745df7e0f3ff4c06 | # Dataset Card for "coco-2017-instance"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | aha-org/coco-2017-instance | [
"region:us"
] | 2024-02-01T03:25:45+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "annotations", "dtype": "image"}, {"name": "objects", "struct": [{"name": "bbox", "sequence": {"sequence": "float32"}}, {"name": "categories", "sequence": {"class_label": {"names": {"0": "person", "1": "bicycle", "2": "car", "3": "motorcycle", "4": "airplane", "5": "bus", "6": "train", "7": "truck", "8": "boat", "9": "traffic light", "10": "fire hydrant", "11": "stop sign", "12": "parking meter", "13": "bench", "14": "bird", "15": "cat", "16": "dog", "17": "horse", "18": "sheep", "19": "cow", "20": "elephant", "21": "bear", "22": "zebra", "23": "giraffe", "24": "backpack", "25": "umbrella", "26": "handbag", "27": "tie", "28": "suitcase", "29": "frisbee", "30": "skis", "31": "snowboard", "32": "sports ball", "33": "kite", "34": "baseball bat", "35": "baseball glove", "36": "skateboard", "37": "surfboard", "38": "tennis racket", "39": "bottle", "40": "wine glass", "41": "cup", "42": "fork", "43": "knife", "44": "spoon", "45": "bowl", "46": "banana", "47": "apple", "48": "sandwich", "49": "orange", "50": "broccoli", "51": "carrot", "52": "hot dog", "53": "pizza", "54": "donut", "55": "cake", "56": "chair", "57": "couch", "58": "potted plant", "59": "bed", "60": "dining table", "61": "toilet", "62": "tv", "63": "laptop", "64": "mouse", "65": "remote", "66": "keyboard", "67": "cell phone", "68": "microwave", "69": "oven", "70": "toaster", "71": "sink", "72": "refrigerator", "73": "book", "74": "clock", "75": "vase", "76": "scissors", "77": "teddy bear", "78": "hair drier", "79": "toothbrush"}}}}, {"name": "area", "sequence": "float32"}, {"name": "iscrowd", "sequence": "bool"}]}, {"name": "height", "dtype": "int64"}, {"name": "width", "dtype": "int64"}, {"name": "date_captured", "dtype": "string"}, {"name": "license", "dtype": {"class_label": {"names": {"0": "Attribution-NonCommercial-ShareAlike License", "1": "Attribution-NonCommercial License", "2": "Attribution-NonCommercial-NoDerivs License", "3": "Attribution License", "4": "Attribution-ShareAlike License", "5": "Attribution-NoDerivs License", "6": "No known", "7": "United States Government Work"}}}}, {"name": "coco_url", "dtype": "string"}, {"name": "flickr_url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 19938543197.152, "num_examples": 117266}, {"name": "validation", "num_bytes": 811750075.52, "num_examples": 4952}, {"name": "test", "num_bytes": 6492416417.58, "num_examples": 40670}], "download_size": 26960423195, "dataset_size": 27242709690.252}} | 2024-02-01T04:06:28+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "coco-2017-instance"
More Information needed | [
"# Dataset Card for \"coco-2017-instance\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"coco-2017-instance\"\n\nMore Information needed"
] |
feec086e9a039dd5cf84db10b88bc18292ea7a9e |
# angle UAE pairs
This dataset combines the four pair datasets used for [Universal AnglE Embeddings](https://github.com/SeanLee97/AnglE/tree/main/examples/UAE).
- note that `qrecc` is not included in this dataset
```
multi_nli (train set)
snli (train set)
qqp (train set)
mrpc (train set)
``` | BEE-spoke-data/angle-UAE-pairs | [
"task_categories:sentence-similarity",
"task_categories:feature-extraction",
"language:en",
"license:odc-by",
"region:us"
] | 2024-02-01T03:28:44+00:00 | {"language": ["en"], "license": "odc-by", "task_categories": ["sentence-similarity", "feature-extraction"], "dataset_info": {"features": [{"name": "text1", "dtype": "string"}, {"name": "text2", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 189307831.0, "num_examples": 1310368}, {"name": "validation", "num_bytes": 6859317.0, "num_examples": 50838}, {"name": "test", "num_bytes": 55301665.0, "num_examples": 402690}], "download_size": 168093774, "dataset_size": 251468813.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}]} | 2024-02-01T03:34:03+00:00 | [] | [
"en"
] | TAGS
#task_categories-sentence-similarity #task_categories-feature-extraction #language-English #license-odc-by #region-us
|
# angle UAE pairs
This dataset combines the four pair datasets used for Universal AnglE Embeddings.
- note that 'qrecc' is not included in this dataset
| [
"# angle UAE pairs\n\n\nloaded the four datasets containing pairs for Universal AnglE Embeddings\n- note that 'qrecc' is not included in this dataset"
] | [
"TAGS\n#task_categories-sentence-similarity #task_categories-feature-extraction #language-English #license-odc-by #region-us \n",
"# angle UAE pairs\n\n\nloaded the four datasets containing pairs for Universal AnglE Embeddings\n- note that 'qrecc' is not included in this dataset"
] |
105a60677b7d7d7fe0c92285b632ed68d136ef69 |
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
LGG Segmentation Dataset
This dataset contains brain MR images together with manual FLAIR abnormality segmentation masks.
The images were obtained from The Cancer Imaging Archive (TCIA).
This dataset is a subset of <a href="https://www.kaggle.com/datasets/mateuszbuda/lgg-mri-segmentation">lgg-mri-segmentation</a>.
They correspond to 61 patients (16 CS: Case Western Reserve University & 45 DU: Duke University) included in The Cancer Genome Atlas (TCGA) lower-grade glioma collection with at least fluid-attenuated inversion recovery (FLAIR) sequence and genomic cluster data available.
Tumor genomic clusters and patient data are provided in the <code>raw.csv</code> file.
All images are provided in <code>.tif</code> format with 3 channels per image. For 61 cases (16 CS: Case Western Reserve University & 45 DU: Duke University), 3 sequences are available, i.e. pre-contrast, FLAIR, and post-contrast (in this order of channels). Post-contrast sequence and pre-contrast sequence are missing in some cases. Missing sequences are replaced with FLAIR sequences to make all images 3-channel. Masks are binary, 1-channel images. They segment FLAIR abnormality in the FLAIR sequence (available for all cases).
The dataset is organized into 61 folders named after case ID containing source institution information.
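A minimal sketch of loading one slice and its mask; the file paths are hypothetical, and the `_mask` suffix follows the naming convention of the original lgg-mri-segmentation release:

```python
import numpy as np
from PIL import Image

# Hypothetical case folder and slice; replace with a real path from the dataset.
slice_path = "TCGA_CS_4941_19960909/TCGA_CS_4941_19960909_1.tif"
mask_path = "TCGA_CS_4941_19960909/TCGA_CS_4941_19960909_1_mask.tif"

image = np.array(Image.open(slice_path))  # (H, W, 3): pre-contrast, FLAIR, post-contrast channels
mask = np.array(Image.open(mask_path))    # 1-channel binary FLAIR abnormality mask

print(image.shape, mask.shape)
```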
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [https://github.com/mecxlan/TCGA_LGG_MriBraTS/]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
`.tif` images and a `raw.csv` file containing patient data.
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
- **Source:** [https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=5309188]
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [https://github.com/mecxlan]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | mecxlan/TCGA_LGG_MriBraTS | [
"task_categories:feature-extraction",
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-nc-4.0",
"biology",
"cancer",
"images",
"United States",
"Healthcare",
"region:us"
] | 2024-02-01T03:38:05+00:00 | {"language": ["en"], "license": "cc-by-nc-4.0", "size_categories": ["1K<n<10K"], "task_categories": ["feature-extraction"], "pretty_name": "mecxlan", "tags": ["biology", "cancer", "images", "United States", "Healthcare"]} | 2024-02-01T05:49:54+00:00 | [] | [
"en"
] | TAGS
#task_categories-feature-extraction #size_categories-1K<n<10K #language-English #license-cc-by-nc-4.0 #biology #cancer #images #United States #Healthcare #region-us
|
# Dataset Card for Dataset Name
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
## Dataset Details
### Dataset Description
LGG Segmentation Dataset
This dataset contains brain MR images together with manual FLAIR abnormality segmentation masks.
The images were obtained from The Cancer Imaging Archive (TCIA).
This dataset is a subset of lgg-mri-segmentation <a href ="URL
They correspond to 61 patients (16 CS: Case Western Reserve University & 45 DU: Duke University) included in The Cancer Genome Atlas (TCGA) lower-grade glioma collection with at least fluid-attenuated inversion recovery (FLAIR) sequence and genomic cluster data available.
Tumor genomic clusters and patient data are provided in the <code>URL</code> file.
All images are provided in <code>.tif</code> format with 3 channels per image. For 61 cases (16 CS: Case Western Reserve University & 45 DU: Duke University), 3 sequences are available, i.e. pre-contrast, FLAIR, and post-contrast (in this order of channels). Post-contrast sequence and pre-contrast sequence are missing in some cases. Missing sequences are replaced with FLAIR sequences to make all images 3-channel. Masks are binary, 1-channel images. They segment FLAIR abnormality in the FLAIR sequence (available for all cases).
The dataset is organized into 61 folders named after case ID containing source institution information.
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository: [URL
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
`.tif` images and a `raw.csv` file containing patient data.
## Dataset Creation
### Curation Rationale
### Source Data
- Source: [URL
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [URL
## Dataset Card Contact
| [
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\nLGG Segmentation Dataset \nThis dataset contains brain MR images together with manual FLAIR abnormality segmentation masks. \nThe images were obtained from The Cancer Imaging Archive (TCIA). \nThis dataset is the subset of llg-mri-segmentation <a href =\"URL\n\nThey correspond to 61 patients (16 CS: Case Western Reserve University & 45 DU: Duke University) included in The Cancer Genome Atlas (TCGA) lower-grade glioma collection with at least fluid-attenuated inversion recovery (FLAIR) sequence and genomic cluster data available. \nTumor genomic clusters and patient data are provided in the <code>URL</code> file.\n\nAll images are provided in <code>.tif</code> format with 3 channels per image. For 61 cases (16 CS: Case Western Reserve University & 45 DU: Duke University), 3 sequences are available, i.e. pre-contrast, FLAIR, and post-contrast (in this order of channels). Post-contrast sequence and pre-contrast sequence are missing in some cases. Missing sequences are replaced with FLAIR sequences to make all images 3-channel. Masks are binary, 1-channel images. They segment FLAIR abnormality in the FLAIR sequence (available for all cases).\n\nThe dataset is organized into 61 folders named after case ID containing source institution information.\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: [URL\n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure\n\n\n.tif images and a csv file",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data\n\n\n\n - Source: [URL",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [URL",
"## Dataset Card Contact"
] | [
"TAGS\n#task_categories-feature-extraction #size_categories-1K<n<10K #language-English #license-cc-by-nc-4.0 #biology #cancer #images #United States #Healthcare #region-us \n",
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\nLGG Segmentation Dataset \nThis dataset contains brain MR images together with manual FLAIR abnormality segmentation masks. \nThe images were obtained from The Cancer Imaging Archive (TCIA). \nThis dataset is the subset of llg-mri-segmentation <a href =\"URL\n\nThey correspond to 61 patients (16 CS: Case Western Reserve University & 45 DU: Duke University) included in The Cancer Genome Atlas (TCGA) lower-grade glioma collection with at least fluid-attenuated inversion recovery (FLAIR) sequence and genomic cluster data available. \nTumor genomic clusters and patient data are provided in the <code>URL</code> file.\n\nAll images are provided in <code>.tif</code> format with 3 channels per image. For 61 cases (16 CS: Case Western Reserve University & 45 DU: Duke University), 3 sequences are available, i.e. pre-contrast, FLAIR, and post-contrast (in this order of channels). Post-contrast sequence and pre-contrast sequence are missing in some cases. Missing sequences are replaced with FLAIR sequences to make all images 3-channel. Masks are binary, 1-channel images. They segment FLAIR abnormality in the FLAIR sequence (available for all cases).\n\nThe dataset is organized into 61 folders named after case ID containing source institution information.\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: [URL\n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure\n\n\n.tif images and a csv file",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data\n\n\n\n - Source: [URL",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [URL",
"## Dataset Card Contact"
] |
b555b74ff1fd54cf534f04b22f650ae9a7c431d1 | # Dataset Card for wbfns 2018
42 publicly-available document texts downloaded from the World Bank Documents and Reports API.
## Dataset Details
### Dataset Description
42 World Bank document texts, related to Nutrition and food security, published in 2018. All documents are publicly available from the World Bank Documents and Reports API, here: https://documents.worldbank.org/en/publication/documents-reports/api
- **License:** mit
## Uses
Intended to be used in a very short text summarisation task.
### Out-of-Scope Use
Not intended to be used for any other purposes.
## Dataset Structure
"id" = World Bank document ID number.
"admreg" = Administrative region.
"count" = The country or countries covered by the document.
"docty" = The type of document, such as 'Project Paper' or 'Working Paper'.
"theme" = Comma-separated list of themes which the document pertains to.
"docdt" = Date on which the document was published.
"majdocty" = Document type according to main usage e.g. 'Project Documents'.
"pdfurl" = Public URL from which the PDF version of the document can be accessed.
"txturl" = Public URL from which the TXT version of the document can be accessed.
"url_friendly_title" = Public parent URL at which the document is hosted.
"projectid" = World Bank Project ID.
"url" = Alternate parent URL at the document is hosted.
"doc-text" = Contents of the 'txturl', above.
## Dataset Creation
### Curation Rationale
Serves as material for a short sample exercise in text summarisation.
## Dataset Card Contact
[email protected] | lodeawb/wbfns | [
"task_categories:summarization",
"size_categories:n<1K",
"language:en",
"license:mit",
"natural-language-understanding",
"region:us"
] | 2024-02-01T03:58:26+00:00 | {"language": ["en"], "license": "mit", "size_categories": ["n<1K"], "task_categories": ["summarization"], "tags": ["natural-language-understanding"]} | 2024-02-02T00:27:06+00:00 | [] | [
"en"
] | TAGS
#task_categories-summarization #size_categories-n<1K #language-English #license-mit #natural-language-understanding #region-us
| # Dataset Card for wbfns 2018
42 publicly-available document texts downloaded from the World Bank Documents and Report API.
## Dataset Details
### Dataset Description
42 World Bank document texts, related to Nutrition and food security, published in 2018. All documents are publicly available from the World Bank Documents and Reports API, here: URL
- License: mit
## Uses
Intended to be used in a very short text summarisation task.
### Out-of-Scope Use
Not intended to be used for any other purposes.
## Dataset Structure
"id" = World Bank document ID number.
"admreg" = Administrative region.
"count" = The country or countries covered by the document.
"docty" = The type of document, such as 'Project Paper' or 'Working Paper'.
"theme" = Comma-separated list of themes which the document pertains to.
"docdt" = Date on which the document was published.
"majdocty" = Document type according to main usage e.g. 'Project Documents'.
"pdfurl" = Public URL from which the PDF version of the document can be accessed.
"txturl" = Public URL from which the TXT version of the document can be accessed.
"url_friendly_title" = Public parent URL at which the document is hosted.
"projectid" = World Bank Project ID.
"url" = Alternate parent URL at the document is hosted.
"doc-text" = Contents of the 'txturl', above.
## Dataset Creation
### Curation Rationale
Serves as material for a short sample exercise in text summarisation.
## Dataset Card Contact
lodea@URL | [
"# Dataset Card for wbfns 2018 \n\n42 publicly-available document texts downloaded from the World Bank Documents and Report API.",
"## Dataset Details",
"### Dataset Description\n\n42 World Bank document texts, related to Nutrition and food security, published in 2018. All documents are publicly available from the World Bank Project API, here: URL\n\n- License: mit",
"## Uses\n\nIntended to be used in very short text summarisation task.",
"### Out-of-Scope Use\n\nNot intended to be used for any other purposes.",
"## Dataset Structure\n\n\"id\" = World Bank document ID number.\n\n\"admreg\" = Administrative region.\n\n\"count\" = The country or countries covered by the document.\n\n\"docty\" = The type of document, such as 'Project Paper' or 'Working Paper'.\n\n\"theme\" = Comma-separated list of themes which the document pertains to.\n\n\"docdt\" = Date on which the document was published.\n\n\"majdocty\" = Document type according to main usage e.g. 'Project Documents'.\n\n\"pdfurl\" = Public URL from which the PDF version of the document can be accessed. \n\n\"txturl\" = Public URL from which the TXT version of the document can be accessed. \n\n\"url_friendly_title\" = Public parent URL at which the document is hosted. \n\n\"projectid\" = World Bank Project ID.\n\n\"url\" = Alternate parent URL at the document is hosted.\n\n\"doc-text\" = Contents of the 'txturl', above.",
"## Dataset Creation",
"### Curation Rationale\n\nServes as material for short sample exercise in text summarisation.",
"## Dataset Card Contact\n\nlodea@URL"
] | [
"TAGS\n#task_categories-summarization #size_categories-n<1K #language-English #license-mit #natural-language-understanding #region-us \n",
"# Dataset Card for wbfns 2018 \n\n42 publicly-available document texts downloaded from the World Bank Documents and Report API.",
"## Dataset Details",
"### Dataset Description\n\n42 World Bank document texts, related to Nutrition and food security, published in 2018. All documents are publicly available from the World Bank Project API, here: URL\n\n- License: mit",
"## Uses\n\nIntended to be used in very short text summarisation task.",
"### Out-of-Scope Use\n\nNot intended to be used for any other purposes.",
"## Dataset Structure\n\n\"id\" = World Bank document ID number.\n\n\"admreg\" = Administrative region.\n\n\"count\" = The country or countries covered by the document.\n\n\"docty\" = The type of document, such as 'Project Paper' or 'Working Paper'.\n\n\"theme\" = Comma-separated list of themes which the document pertains to.\n\n\"docdt\" = Date on which the document was published.\n\n\"majdocty\" = Document type according to main usage e.g. 'Project Documents'.\n\n\"pdfurl\" = Public URL from which the PDF version of the document can be accessed. \n\n\"txturl\" = Public URL from which the TXT version of the document can be accessed. \n\n\"url_friendly_title\" = Public parent URL at which the document is hosted. \n\n\"projectid\" = World Bank Project ID.\n\n\"url\" = Alternate parent URL at the document is hosted.\n\n\"doc-text\" = Contents of the 'txturl', above.",
"## Dataset Creation",
"### Curation Rationale\n\nServes as material for short sample exercise in text summarisation.",
"## Dataset Card Contact\n\nlodea@URL"
] |
8710cd0f72f104e6af5c459795a12f28aff5c5b9 | There are about 35 raw kitchen ingredients in this dataset. Cheers | arjunpatel1234/Recipe_Ingredients | [
"license:unknown",
"region:us"
] | 2024-02-01T04:11:21+00:00 | {"license": "unknown"} | 2024-02-01T04:18:32+00:00 | [] | [] | TAGS
#license-unknown #region-us
| There are about 35 raw kitchen ingredients in this dataset. Cheers | [] | [
"TAGS\n#license-unknown #region-us \n"
] |
1f18d811c143269d6aec3535f4a068ec2a0f2a84 |
# Land application field trial data
### Intro
This dataset is a repository of results from our Land Application Detection Model trial with two organizations.
Land application is the process of disposing of agricultural animal waste by spraying it onto fields. [We developed a model](https://github.com/reglab/land-application-detection?tab=readme-ov-file) to detect these practices.
This dataset represents the results of a real-world trial to verify and label these detected spreads.
### Data description
#### Structured data
- sent_to_wdnr.csv
- Each row is a detected spread that we forwarded to our partners at WDNR
- sent_to_elpc.csv
- Each row is a detected spread that we forwarded to our partners at ELPC
- wdnr_responses.csv
- Each row is a response to a detection from sent_to_wdnr.csv, which contains a preliminary determination by WDNR staff as to whether the image looks like a spread and, if it was determined to be likely spreading, the results of an investigation into said spread.
- elpc_responses_raw.csv
- Each row is a response to a detection from sent_to_elpc.csv, which gives the results of the ELPC investigation into that detection through the use of citizen volunteers verifying in person.
- elpc_responses_clean.csv
- Same as the raw file but with corrected detection ids to deal with a data entry error.
#### Image data
- images/
- This directory contains .jpeg images of satellite data fed into the model that were sent to either of the partners. Images were captured by [Planet](https://www.planet.com/) using the PlanetScope sensor, visual spectrum 3m images.
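As a rough sketch of how the structured files fit together, one can join detections to responses with pandas; note that the name of the shared detection-id column below is an assumption and should be checked against the actual CSV headers:

```python
import pandas as pd

sent = pd.read_csv("sent_to_wdnr.csv")
responses = pd.read_csv("wdnr_responses.csv")

# "detection_id" is a hypothetical name for the shared key column.
merged = sent.merge(responses, on="detection_id", how="left")
print(merged.head())  # detections alongside WDNR's preliminary determinations
```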
## Citation
`@misc{stanford_regulation_evaluation_and_governance_lab_2024,
author = { {Stanford Regulation, Evaluation, and Governance Lab} },
title = { land-app-trial (Revision b3d0e11) },
year = 2024,
url = { https://huggingface.co/datasets/reglab/land-app-trial },
doi = { 10.57967/hf/1733 },
publisher = { Hugging Face }
}`
| reglab/land-app-trial | [
"task_categories:object-detection",
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-4.0",
"agriculture",
"environment",
"doi:10.57967/hf/1733",
"region:us"
] | 2024-02-01T04:34:35+00:00 | {"language": ["en"], "license": "cc-by-4.0", "size_categories": ["1K<n<10K"], "task_categories": ["object-detection"], "tags": ["agriculture", "environment"]} | 2024-02-02T19:48:23+00:00 | [] | [
"en"
] | TAGS
#task_categories-object-detection #size_categories-1K<n<10K #language-English #license-cc-by-4.0 #agriculture #environment #doi-10.57967/hf/1733 #region-us
|
# Land application field trial data
### Intro
This dataset is a repository of results from our Land Application Detection Model trial with two organizations.
Land application is the process of disposing of agricultural animal waste by spraying it onto fields. We developed a model to detect these practices.
This dataset represents the results of a real-world trial to verify and label these detected spreads.
### Data description
#### Structured data
- sent_to_wdnr.csv
- Each row is a detected spread that we forwarded to our partners at WDNR
- sent_to_elpc.csv
- Each row is a detected spread that we forwarded to our partners at ELPC
- wdnr_responses.csv
- Each row is a response to a detection from sent_to_wdnr.csv, which contains a preliminary determination by WDNR staff as to whether the image looks like a spread and, if it was determined to be likely spreading, the results of an investigation into said spread.
- elpc_responses_raw.csv
- Each row is a response to a detection from sent_to_elpc.csv, which gives the results of the ELPC investigation into that detection through the use of citizen volunteers verifying in person.
- elpc_responses_clean.csv
- Same as the raw file but with corrected detection ids to deal with a data entry error.
#### Image data
- images/
- This directory contains .jpeg images of satellite data fed into the model that were sent to either of the partners. Images were captured by Planet using the PlanetScope sensor, visual spectrum 3m images.
'@misc{stanford_regulation_evaluation_and_governance_lab_2024,
author = { {Stanford Regulation, Evaluation, and Governance Lab} },
title = { land-app-trial (Revision b3d0e11) },
year = 2024,
url = { URL },
doi = { 10.57967/hf/1733 },
publisher = { Hugging Face }
}'
| [
"# Land application field trial data",
"### Intro\nThis dataset is a repository of results from our Land Application Detection Model trial with two organizations. \nLand application is the process of disposing of agricultural animal waste by spraying it onto fields. We developed a model to detect these practices. \nThis dataset represents the results of a real world trial to verify and label these detected spreads.",
"### Data description",
"#### Structured data\n- sent_to_wdnr.csv\n - Each row is a detected spread that we forwarded to our partners at WDNR\n- sent_to_elpc.csv\n - Each row is a detected spread that we forwarded to our partners at ELPC\n- wdnr_responses.csv\n - Each row is a response to a detection from sent_to_wdnr.csv which contains a preliminary determination by WDNR staff as to whether the image looks like a spread and if it was determined to be likely spreading, the results of an investigation into said spread.\n- elpc_responses_raw.csv\n - Each row is a response to a detection from sent_to_elpc.csv which is the results of the ELPC investigation into that detection through the use of citizen volunteers verifiying in person.\n- elpc_responses_clean.csv\n - Same as the raw file but with corrected detection ids to deal with a data entry error.",
"#### Image data\n- images/\n - This directory contains .jpeg images of satellite data fed into the model that were sent to either of the partners. Images were captured by Planet using the PlanetScope sensor, visual spectrum 3m images.\n\n\n'@misc {stanford_regulation,_evaluation,_and_governance_lab_2024,\n\tauthor = { {Stanford Regulation, Evaluation, and Governance Lab} },\n\ttitle = { land-app-trial (Revision b3d0e11) },\n\tyear = 2024,\n\turl = { URL },\n\tdoi = { 10.57967/hf/1733 },\n\tpublisher = { Hugging Face }\n}'"
] | [
"TAGS\n#task_categories-object-detection #size_categories-1K<n<10K #language-English #license-cc-by-4.0 #agriculture #environment #doi-10.57967/hf/1733 #region-us \n",
"# Land application field trial data",
"### Intro\nThis dataset is a repository of results from our Land Application Detection Model trial with two organizations. \nLand application is the process of disposing of agricultural animal waste by spraying it onto fields. We developed a model to detect these practices. \nThis dataset represents the results of a real world trial to verify and label these detected spreads.",
"### Data description",
"#### Structured data\n- sent_to_wdnr.csv\n - Each row is a detected spread that we forwarded to our partners at WDNR\n- sent_to_elpc.csv\n - Each row is a detected spread that we forwarded to our partners at ELPC\n- wdnr_responses.csv\n - Each row is a response to a detection from sent_to_wdnr.csv which contains a preliminary determination by WDNR staff as to whether the image looks like a spread and if it was determined to be likely spreading, the results of an investigation into said spread.\n- elpc_responses_raw.csv\n - Each row is a response to a detection from sent_to_elpc.csv which is the results of the ELPC investigation into that detection through the use of citizen volunteers verifiying in person.\n- elpc_responses_clean.csv\n - Same as the raw file but with corrected detection ids to deal with a data entry error.",
"#### Image data\n- images/\n - This directory contains .jpeg images of satellite data fed into the model that were sent to either of the partners. Images were captured by Planet using the PlanetScope sensor, visual spectrum 3m images.\n\n\n'@misc {stanford_regulation,_evaluation,_and_governance_lab_2024,\n\tauthor = { {Stanford Regulation, Evaluation, and Governance Lab} },\n\ttitle = { land-app-trial (Revision b3d0e11) },\n\tyear = 2024,\n\turl = { URL },\n\tdoi = { 10.57967/hf/1733 },\n\tpublisher = { Hugging Face }\n}'"
] |
12d810f8087bc77a81f5f38e72627969a87595fd |
# Atget Paris Collection
Welcome to the Atget Paris Collection, a carefully assembled dataset of public domain images by the esteemed photographer Eugène Atget, depicting Paris in the 1900s. This dataset, enhanced with captions generated by GPT-Vision, is designed for training AI models in recognizing and interpreting historical urban imagery.
[Join our Discord](https://discord.com/invite/m3TBB9XEkb)
## Dataset Overview
- **Content**: This collection features 31 meticulously chosen images that capture the essence of Parisian life, architecture, and streetscapes as seen through Atget's lens over a century ago. The range of photographs includes storefronts, cobbled streets, and ornate doorways, offering a rich variety of scenes.
- **Source**: The original photographs are part of the public domain, accessed from the National Gallery of Art. The dataset has been curated with an added value of descriptive captions, making it a unique resource for AI training.
- **Usage**: This dataset is intended for use in training AI models for tasks such as historical photo analysis, pattern recognition in urban settings, and artistic image synthesis.
## Licensing
- The original images by Eugène Atget, sourced from the National Gallery of Art, are in the public domain. However, this curated dataset, including the accompanying GPT-Vision generated captions, is provided under the Creative Commons Attribution-NonCommercial 2.0 Generic (CC BY-NC 2.0) license. This licensing facilitates non-commercial use with appropriate credit and prohibits commercial usage.
- For more details on this license, please visit [CC BY-NC 2.0 License details](https://creativecommons.org/licenses/by-nc/2.0/).
## Dataset Composition
Each image in the collection is paired with a caption that has been optimized for AI training, including token shuffling, to enhance learning efficiency. This thoughtful combination of historic images and modern AI technology creates a valuable tool for developers and researchers.
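For readers curious what token shuffling can look like in practice, here is one plausible implementation (not necessarily the exact procedure used for this collection) that shuffles comma-separated caption tokens:

```python
import random

def shuffle_caption(caption: str, seed: int | None = None) -> str:
    """Shuffle comma-separated tokens so a model does not overfit to token order."""
    tokens = [t.strip() for t in caption.split(",") if t.strip()]
    random.Random(seed).shuffle(tokens)
    return ", ".join(tokens)

print(shuffle_caption("paris, 1900s, storefront, cobbled street, ornate doorway"))
```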
## How to Use the Collection
1. **Download the Collection**: Access and download the collection via the provided link for non-commercial AI model training purposes.
2. **Review Images and Captions**: Engage with the collection to appreciate the diverse urban scenarios and detailed captions.
3. **Implement in AI Training**: Employ the dataset in training your AI models, taking advantage of the nuanced captions for advanced historical image comprehension.
## Contributions and Feedback
We value your insights and contributions. Should you have any feedback or desire to contribute additional images or captions to the collection, please reach out to us. Your participation aids in the continual enhancement of this dataset for the AI and historical research communities.
## Related
https://blib.la/blog/crafting-the-future-blibla-s-ethical-approach-to-ai-model-training
---
The Atget Paris Collection is a distinctive asset for advancing AI capabilities in historical image understanding. We trust it will serve as a significant resource in your AI endeavors. | Blib-la/eugene_atget_dataset | [
"license:cc-by-nc-2.0",
"region:us"
] | 2024-02-01T04:49:25+00:00 | {"license": "cc-by-nc-2.0", "viewer": false} | 2024-02-01T10:44:43+00:00 | [] | [] | TAGS
#license-cc-by-nc-2.0 #region-us
|
# Atget Paris Collection
Welcome to the Atget Paris Collection, a carefully assembled dataset of public domain images by the esteemed photographer Eugène Atget, depicting Paris in the 1900s. This dataset, enhanced with captions generated by GPT-Vision, is designed for training AI models in recognizing and interpreting historical urban imagery.
## Dataset Overview
- Content: This collection features 31 meticulously chosen images that capture the essence of Parisian life, architecture, and streetscapes as seen through Atget's lens over a century ago. The range of photographs includes storefronts, cobbled streets, and ornate doorways, offering a rich variety of scenes.
- Source: The original photographs are part of the public domain, accessed from the National Gallery of Art. The dataset has been curated with an added value of descriptive captions, making it a unique resource for AI training.
- Usage: This dataset is intended for use in training AI models for tasks such as historical photo analysis, pattern recognition in urban settings, and artistic image synthesis.
## Licensing
- The original images by Eugène Atget, sourced from the National Gallery of Art, are in the public domain. However, this curated dataset, including the accompanying GPT-Vision generated captions, is provided under the Creative Commons Attribution-NonCommercial 2.0 Generic (CC BY-NC 2.0) license. This licensing facilitates non-commercial use with appropriate credit and prohibits commercial usage.
- For more details on this license, please visit CC BY-NC 2.0 License details.
## Dataset Composition
Each image in the collection is paired with a caption that has been optimized for AI training, including token shuffling, to enhance learning efficiency. This thoughtful combination of historic images and modern AI technology creates a valuable tool for developers and researchers.
## How to Use the Collection
1. Download the Collection: Access and download the collection via the provided link for non-commercial AI model training purposes.
2. Review Images and Captions: Engage with the collection to appreciate the diverse urban scenarios and detailed captions.
3. Implement in AI Training: Employ the dataset in training your AI models, taking advantage of the nuanced captions for advanced historical image comprehension.
## Contributions and Feedback
We value your insights and contributions. Should you have any feedback or desire to contribute additional images or captions to the collection, please reach out to us. Your participation aids in the continual enhancement of this dataset for the AI and historical research communities.
## Related
URL
---
The Atget Paris Collection is a distinctive asset for advancing AI capabilities in historical image understanding. We trust it will serve as a significant resource in your AI endeavors. | [
"# Atget Paris Collection\n\nWelcome to the Atget Paris Collection, a carefully assembled dataset of public domain images by the esteemed photographer Eugène Atget, depicting Paris in the 1900s. This dataset, enhanced with captions generated by GPT-Vision, is designed for training AI models in recognizing and interpreting historical urban imagery.\n\n license. This licensing facilitates non-commercial use with appropriate credit and prohibits commercial usage.\n- For more details on this license, please visit CC BY-NC 2.0 License details.",
"## Dataset Composition\n\nEach image in the collection is paired with a caption that has been optimized for AI training, including token shuffling, to enhance learning efficiency. This thoughtful combination of historic images and modern AI technology creates a valuable tool for developers and researchers.",
"## How to Use the Collection\n\n1. Download the Collection: Access and download the collection via the provided link for non-commercial AI model training purposes.\n2. Review Images and Captions: Engage with the collection to appreciate the diverse urban scenarios and detailed captions.\n3. Implement in AI Training: Employ the dataset in training your AI models, taking advantage of the nuanced captions for advanced historical image comprehension.",
"## Contributions and Feedback\n\nWe value your insights and contributions. Should you have any feedback or desire to contribute additional images or captions to the collection, please reach out to us. Your participation aids in the continual enhancement of this dataset for the AI and historical research communities.",
"## Related\n\nURL\n\n---\n\nThe Atget Paris Collection is a distinctive asset for advancing AI capabilities in historical image understanding. We trust it will serve as a significant resource in your AI endeavors."
] | [
"TAGS\n#license-cc-by-nc-2.0 #region-us \n",
"# Atget Paris Collection\n\nWelcome to the Atget Paris Collection, a carefully assembled dataset of public domain images by the esteemed photographer Eugène Atget, depicting Paris in the 1900s. This dataset, enhanced with captions generated by GPT-Vision, is designed for training AI models in recognizing and interpreting historical urban imagery.\n\n license. This licensing facilitates non-commercial use with appropriate credit and prohibits commercial usage.\n- For more details on this license, please visit CC BY-NC 2.0 License details.",
"## Dataset Composition\n\nEach image in the collection is paired with a caption that has been optimized for AI training, including token shuffling, to enhance learning efficiency. This thoughtful combination of historic images and modern AI technology creates a valuable tool for developers and researchers.",
"## How to Use the Collection\n\n1. Download the Collection: Access and download the collection via the provided link for non-commercial AI model training purposes.\n2. Review Images and Captions: Engage with the collection to appreciate the diverse urban scenarios and detailed captions.\n3. Implement in AI Training: Employ the dataset in training your AI models, taking advantage of the nuanced captions for advanced historical image comprehension.",
"## Contributions and Feedback\n\nWe value your insights and contributions. Should you have any feedback or desire to contribute additional images or captions to the collection, please reach out to us. Your participation aids in the continual enhancement of this dataset for the AI and historical research communities.",
"## Related\n\nURL\n\n---\n\nThe Atget Paris Collection is a distinctive asset for advancing AI capabilities in historical image understanding. We trust it will serve as a significant resource in your AI endeavors."
] |
0a5b3268d422c6d2f7781a70a1f2797dd3168649 |
## Description
This dataset is a combined version of two separate datasets: [yahma_alpaca_cleaned_telugu_filtered_and_romanized](https://huggingface.co/datasets/Telugu-LLM-Labs/yahma_alpaca_cleaned_telugu_filtered_and_romanized) and [teknium_GPTeacher_general_instruct_telugu_filtered_and_romanized](https://huggingface.co/datasets/Telugu-LLM-Labs/teknium_GPTeacher_general_instruct_telugu_filtered_and_romanized). Both datasets contain Telugu text that has been filtered and romanized.
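A minimal loading sketch (the split names below come from this repository's configuration):

```python
from datasets import load_dataset

# "train" is the merged split; "alpaca" and "teknium" hold the two sources.
combined = load_dataset("indiehackers/telugu_instruction_dataset", split="train")
print(combined)                    # instruction / input / output columns
print(combined[0]["instruction"])  # one example prompt
```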
## Credits
- The dataset [yahma_alpaca_cleaned_telugu_filtered_and_romanized](https://huggingface.co/datasets/Telugu-LLM-Labs/yahma_alpaca_cleaned_telugu_filtered_and_romanized) is provided by Telugu-LLM-Labs.
- The dataset [teknium_GPTeacher_general_instruct_telugu_filtered_and_romanized](https://huggingface.co/datasets/Telugu-LLM-Labs/teknium_GPTeacher_general_instruct_telugu_filtered_and_romanized) is provided by Telugu-LLM-Labs.
| indiehackers/telugu_instruction_dataset | [
"region:us"
] | 2024-02-01T05:05:25+00:00 | {"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "alpaca", "num_bytes": 33750562, "num_examples": 28910}, {"name": "teknium", "num_bytes": 35037736, "num_examples": 43614}, {"name": "train", "num_bytes": 68788298.0, "num_examples": 72524}], "download_size": 78368487, "dataset_size": 137576596.0}, "configs": [{"config_name": "default", "data_files": [{"split": "alpaca", "path": "data/alpaca-*"}, {"split": "teknium", "path": "data/teknium-*"}, {"split": "train", "path": "data/train-*"}]}]} | 2024-02-01T06:16:00+00:00 | [] | [] | TAGS
#region-us
|
## Description
This dataset is a combined version of two separate datasets: yahma_alpaca_cleaned_telugu_filtered_and_romanized and teknium_GPTeacher_general_instruct_telugu_filtered_and_romanized. Both datasets contain Telugu text that has been filtered and romanized.
## Credits
- The dataset yahma_alpaca_cleaned_telugu_filtered_and_romanized is provided by Telugu-LLM-Labs.
- The dataset teknium_GPTeacher_general_instruct_telugu_filtered_and_romanized is provided by Telugu-LLM-Labs.
| [
"## Description\nThis dataset is a combined version of two separate datasets: yahma_alpaca_cleaned_telugu_filtered_and_romanized and teknium_GPTeacher_general_instruct_telugu_filtered_and_romanized. Both datasets contain Telugu text that has been filtered and romanized.",
"## Credits\n- The dataset yahma_alpaca_cleaned_telugu_filtered_and_romanized is provided by Telugu-LLM-Labs.\n- The dataset teknium_GPTeacher_general_instruct_telugu_filtered_and_romanized is provided by Telugu-LLM-Labs."
] | [
"TAGS\n#region-us \n",
"## Description\nThis dataset is a combined version of two separate datasets: yahma_alpaca_cleaned_telugu_filtered_and_romanized and teknium_GPTeacher_general_instruct_telugu_filtered_and_romanized. Both datasets contain Telugu text that has been filtered and romanized.",
"## Credits\n- The dataset yahma_alpaca_cleaned_telugu_filtered_and_romanized is provided by Telugu-LLM-Labs.\n- The dataset teknium_GPTeacher_general_instruct_telugu_filtered_and_romanized is provided by Telugu-LLM-Labs."
] |
283d7d1b097b4973c9ed9574a3bde9e6bdb491ef |
身份证ocr识别 证件提取矫正 验证码自动化 <a href="https://github.com/CCCpan/Gebaini"> 模型获取 </a>
To obtain free models for identity card (ID) OCR (Optical Character Recognition) recognition, you can explore various open-source platforms and repositories such as GitHub, Model Zoo, or specific frameworks' model hubs like TensorFlow Hub or PyTorch Hub. ID OCR recognition models are designed to extract text from identity cards, including personal details like name, ID number, date of birth, and other relevant information. These models are trained on diverse datasets to accurately recognize and extract text from various ID card formats and designs.
<a href="https://github.com/CCCpan/Gebaini"> Click on me free access </a>

| cpans/idcard_name | [
"license:apache-2.0",
"code",
"region:us"
] | 2024-02-01T05:10:37+00:00 | {"license": "apache-2.0", "datasets": ["cpans/idcard_name"], "metrics": ["accuracy"], "pipeline_tag": "image-to-text", "tags": ["code"]} | 2024-02-01T05:25:10+00:00 | [] | [] | TAGS
#license-apache-2.0 #code #region-us
|
身份证ocr识别 证件提取矫正 验证码自动化 <a href="URL"> 模型获取 </a>
To obtain free models for identity card (ID) OCR (Optical Character Recognition) recognition, you can explore various open-source platforms and repositories such as GitHub, Model Zoo, or specific frameworks' model hubs like TensorFlow Hub or PyTorch Hub. ID OCR recognition models are designed to extract text from identity cards, including personal details like name, ID number, date of birth, and other relevant information. These models are trained on diverse datasets to accurately recognize and extract text from various ID card formats and designs.
<a href="URL Click on me free access </a>
!image/png
| [] | [
"TAGS\n#license-apache-2.0 #code #region-us \n"
] |
fa4b4c94381d6a9c776ef78536e9f982e2d16415 | # 中文錯字糾正資料集
由規則與字典自維基百科產生的錯誤糾正資料集。
包含錯誤類型:隨機錯字、近似音錯字、缺字錯誤、冗字錯誤。
- alpha: 95%錯誤,5%不變。單句中可能有多個錯誤。
- beta: 50%錯誤,50%不變。僅有一個錯誤。
- gamma: 100%錯誤。單句中可能有多個錯誤。 | p208p2002/zhtw-sentence-error-correction | [
"language:zh",
"region:us"
] | 2024-02-01T05:27:13+00:00 | {"language": ["zh"], "configs": [{"config_name": "alpha", "data_files": [{"split": "train", "path": "alpha/out.jsonl"}]}, {"config_name": "beta", "data_files": [{"split": "train", "path": "beta/out.jsonl"}]}, {"config_name": "gamma", "data_files": [{"split": "train", "path": "gamma/out.jsonl"}]}]} | 2024-02-05T06:23:22+00:00 | [] | [
"zh"
] | TAGS
#language-Chinese #region-us
| # 中文錯字糾正資料集
由規則與字典自維基百科產生的錯誤糾正資料集。
包含錯誤類型:隨機錯字、近似音錯字、缺字錯誤、冗字錯誤。
- alpha: 95%錯誤,5%不變。單句中可能有多個錯誤。
- beta: 50%錯誤,50%不變。僅有一個錯誤。
- gamma: 100%錯誤。單句中可能有多個錯誤。 | [
"# 中文錯字糾正資料集\n\n由規則與字典自維基百科產生的錯誤糾正資料集。\n\n包含錯誤類型:隨機錯字、近似音錯字、缺字錯誤、冗字錯誤。\n\n- alpha: 95%錯誤,5%不變。單句中可能有多個錯誤。\n- beta: 50%錯誤,50%不變。僅有一個錯誤。\n- gamma: 100%錯誤。單句中可能有多個錯誤。"
] | [
"TAGS\n#language-Chinese #region-us \n",
"# 中文錯字糾正資料集\n\n由規則與字典自維基百科產生的錯誤糾正資料集。\n\n包含錯誤類型:隨機錯字、近似音錯字、缺字錯誤、冗字錯誤。\n\n- alpha: 95%錯誤,5%不變。單句中可能有多個錯誤。\n- beta: 50%錯誤,50%不變。僅有一個錯誤。\n- gamma: 100%錯誤。單句中可能有多個錯誤。"
] |
2dd4206a992cd7a475e1ce2e82b708b014319152 |
LibriSpeech is a corpus of approximately 1000 hours of 16kHz read English speech, prepared by Vassil Panayotov with the assistance of Daniel Povey. The data is derived from read audiobooks from the LibriVox project, and has been carefully segmented and aligned.
Acoustic models, trained on this data set, are available at [icefall](https://github.com/k2-fsa/icefall/tree/master/egs/librispeech) and language models, suitable for evaluation can be found at [openslr](http://www.openslr.org/11/).
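For example, one common way to load a LibriSpeech split in Python is through torchaudio's built-in dataset class (shown for the small `test-clean` subset; this is an illustration, not the k2/icefall recipe itself):

```python
import torchaudio

dataset = torchaudio.datasets.LIBRISPEECH("./data", url="test-clean", download=True)
waveform, sample_rate, transcript, *_ = dataset[0]
print(sample_rate, transcript)  # 16000 Hz audio with its reference transcription
```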
For more information, see the paper "LibriSpeech: an ASR corpus based on public domain audio books", Vassil Panayotov, Guoguo Chen, Daniel Povey and Sanjeev Khudanpur, ICASSP 2015 [pdf](https://www.danielpovey.com/files/2015_icassp_librispeech.pdf) | k2-fsa/LibriSpeech | [
"license:apache-2.0",
"region:us"
] | 2024-02-01T05:40:44+00:00 | {"license": "apache-2.0"} | 2024-02-01T06:55:42+00:00 | [] | [] | TAGS
#license-apache-2.0 #region-us
|
LibriSpeech is a corpus of approximately 1000 hours of 16kHz read English speech, prepared by Vassil Panayotov with the assistance of Daniel Povey. The data is derived from read audiobooks from the LibriVox project, and has been carefully segmented and aligned.
Acoustic models, trained on this data set, are available at icefall and language models, suitable for evaluation can be found at openslr.
For more information, see the paper "LibriSpeech: an ASR corpus based on public domain audio books", Vassil Panayotov, Guoguo Chen, Daniel Povey and Sanjeev Khudanpur, ICASSP 2015 pdf | [] | [
"TAGS\n#license-apache-2.0 #region-us \n"
] |
9c4130bee0951265a9a34361f40d3f7763a73ce6 | # Dataset Card for "lmind_nq_v1_reciteonly_qa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/lmind_nq_v1_reciteonly_qa | [
"region:us"
] | 2024-02-01T05:50:33+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "answers", "struct": [{"name": "answer_start", "sequence": "null"}, {"name": "text", "sequence": "string"}]}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 222533, "num_examples": 300}, {"name": "validation", "num_bytes": 73368, "num_examples": 100}], "download_size": 196555, "dataset_size": 295901}} | 2024-02-01T05:50:38+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "lmind_nq_v1_reciteonly_qa"
More Information needed | [
"# Dataset Card for \"lmind_nq_v1_reciteonly_qa\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"lmind_nq_v1_reciteonly_qa\"\n\nMore Information needed"
] |
2e4255369a8bb0c1e2f0c281d4b3bd051743ff28 |
The training set of [DIV8K](https://competitions.codalab.org/competitions/22217#participate).
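Since DIV8K is typically used to build paired data for super-resolution training, here is a minimal sketch of producing a low-resolution counterpart for one image (hypothetical file name; 4x bicubic downsampling is one common choice):

```python
from PIL import Image

hr = Image.open("DIV8K_train/0001.png")  # hypothetical path into the training set
scale = 4
lr = hr.resize((hr.width // scale, hr.height // scale), Image.BICUBIC)
lr.save("0001_x4_lr.png")  # (lr, hr) now form one LR/HR training pair
```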
## Citation
```bibtex
@inproceedings{gu2019div8k,
title={Div8k: Diverse 8k resolution image dataset},
author={Gu, Shuhang and Lugmayr, Andreas and Danelljan, Martin and Fritsche, Manuel and Lamour, Julien and Timofte, Radu},
booktitle={ICCVW},
year={2019},
}
``` | Iceclear/DIV8K_TrainingSet | [
"task_categories:image-to-image",
"size_categories:1K<n<10K",
"license:apache-2.0",
"region:us"
] | 2024-02-01T06:22:16+00:00 | {"license": "apache-2.0", "size_categories": ["1K<n<10K"], "task_categories": ["image-to-image"]} | 2024-02-01T10:16:40+00:00 | [] | [] | TAGS
#task_categories-image-to-image #size_categories-1K<n<10K #license-apache-2.0 #region-us
|
The training set of DIV8K.
| [] | [
"TAGS\n#task_categories-image-to-image #size_categories-1K<n<10K #license-apache-2.0 #region-us \n"
] |
42fdd60c023900aaa72504e885978be0420ffc88 | # Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | fuyter07/arccosmosytu | [
"language:tr",
"region:us"
] | 2024-02-01T06:51:18+00:00 | {"language": ["tr"]} | 2024-02-01T06:55:13+00:00 | [] | [
"tr"
] | TAGS
#language-Turkish #region-us
| # Dataset Card for Dataset Name
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#language-Turkish #region-us \n",
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] |
6caf3afed95d2fdc57d91ef7c5aef368fbbb6973 |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("zivzhong/ddpm-butterflies-128")
image = pipeline().images[0]  # sample one image from this model's weights
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_power: None
- ema_max_decay: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/HuggingFace7/ddpm-butterflies-128/tensorboard?#scalars)
| zivzhong/ddpm-butterflies-128 | [
"language:en",
"license:apache-2.0",
"region:us"
] | 2024-02-01T07:22:57+00:00 | {"language": "en", "license": "apache-2.0", "library_name": "diffusers", "tags": [], "datasets": "huggan/smithsonian_butterflies_subset", "metrics": []} | 2024-02-01T07:32:46+00:00 | [] | [
"en"
] | TAGS
#language-English #license-apache-2.0 #region-us
|
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the Diffusers library
on the 'huggan/smithsonian_butterflies_subset' dataset.
## Intended uses & limitations
#### How to use
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_power: None
- ema_max_decay: None
- mixed_precision: fp16
### Training results
TensorBoard logs
| [
"# ddpm-butterflies-128",
"## Model description\n\nThis diffusion model is trained with the Diffusers library \non the 'huggan/smithsonian_butterflies_subset' dataset.",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training data\n\n[TODO: describe the data used to train the model]",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 16\n- eval_batch_size: 16\n- gradient_accumulation_steps: 1\n- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None\n- lr_scheduler: None\n- lr_warmup_steps: 500\n- ema_inv_gamma: None\n- ema_inv_gamma: None\n- ema_inv_gamma: None\n- mixed_precision: fp16",
"### Training results\n\n TensorBoard logs"
] | [
"TAGS\n#language-English #license-apache-2.0 #region-us \n",
"# ddpm-butterflies-128",
"## Model description\n\nThis diffusion model is trained with the Diffusers library \non the 'huggan/smithsonian_butterflies_subset' dataset.",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training data\n\n[TODO: describe the data used to train the model]",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 16\n- eval_batch_size: 16\n- gradient_accumulation_steps: 1\n- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None\n- lr_scheduler: None\n- lr_warmup_steps: 500\n- ema_inv_gamma: None\n- ema_inv_gamma: None\n- ema_inv_gamma: None\n- mixed_precision: fp16",
"### Training results\n\n TensorBoard logs"
] |
0fe5feb0fa52a8e9fe86b5decd4a55b79f2a8dcd |
> 睡不着的夜晚和不想睡觉的夜晚
## ⚠️注意
- **请注意,数据来自 R18 的视觉小说,并且包含可能被认为是不适当、令人震惊、令人不安、令人反感和极端的主题。如果您不确定在您的国家拥有任何形式的虚构文字内容的法律后果,请不要下载。**
- **本项目内的所有数据及基于这些数据的衍生作品禁止用作商业性目的。** 我不拥有 `scenario-raw` 里的 krkr2 脚本源文件,而其余的数据处理方法按照 CC BY-NC 4.0 协议开放。
- 按照数据预处理的先后次序,依次是:`scenario-raw` 里是 krkr2 脚本源文件,`scenario` 里是清理后的结构化脚本,`conversation` 里是我主观分段制作的对话格式数据。
- 对于主观分段,一部分是手动的,其余是基于文本相似度的不太靠谱自动分段(我还没推的那部分,我不想被剧透啊啊啊)。手动分段道且阻且长,慢慢做吧,进度记录在 [manual_seg-progress.md](manual_seg-progress.md)。
- 2015-2017 的前四作是单女主,后面的作品都是双女主的,脚本格式也略微不同。
- 🔑 压缩包已加密,解压密码是 yorunohitsuji
## 感谢数据源背后的汉化组们
- 与小萝莉相思相爱:脸肿汉化组
- 勾指婚约洛丽塔:守夜人汉化组&7/9工作室
- 与小萝莉相思相爱的生活:脸肿汉化组
- 同居恋人洛丽塔:守夜人汉化组
- 双子洛丽塔后宫:靴下汉化组x仓库汉化组
- 爱欲姐妹洛丽塔:守夜人汉化组
- 诱惑自大洛丽塔:守夜人汉化组
- 每日亲吻洛丽塔:比喵个人汉化
## 给我自己看的预处理流程
0. 各作的脚本提取出来放在 `scenario-raw/` 里,用 `script/transcode.sh` 转成 UTF-8,`2015-sssa` 额外需要 `script/dos2unix.sh` 转成 LF
1. 修复格式小问题 `cd scenario-raw && bash patch.sh`
2. 运行 `python ks-parse-all.py` 得到 `scenario/`
3. 分段,再转成 `conversation/`
a. 自动分段:`python -m segment.auto path/to/scenario.jsonl`
b. 手动分段后,`python -m segment.manual path/to/scenario-manual_seg.jsonl`
添加新卷:
0. 脚本放在 `scenario-raw/` 里
1. 在 `ks-parse-all.py` 里添加新卷的元数据
| nenekochan/yoruno-vn | [
"task_categories:conversational",
"task_categories:text-generation",
"annotations_creators:expert-generated",
"annotations_creators:machine-generated",
"language:zh",
"license:cc-by-nc-4.0",
"not-for-all-audiences",
"region:us"
] | 2024-02-01T07:53:41+00:00 | {"annotations_creators": ["expert-generated", "machine-generated"], "language": ["zh"], "license": "cc-by-nc-4.0", "task_categories": ["conversational", "text-generation"], "pretty_name": "\u591c\u7f8aL\u7cfb\u5217\u7b80\u4e2d\u811a\u672c", "language_details": "zho_Hans", "tags": ["not-for-all-audiences"]} | 2024-02-08T06:59:05+00:00 | [] | [
"zh"
] | TAGS
#task_categories-conversational #task_categories-text-generation #annotations_creators-expert-generated #annotations_creators-machine-generated #language-Chinese #license-cc-by-nc-4.0 #not-for-all-audiences #region-us
|
> 睡不着的夜晚和不想睡觉的夜晚
## 注意
- 请注意,数据来自 R18 的视觉小说,并且包含可能被认为是不适当、令人震惊、令人不安、令人反感和极端的主题。如果您不确定在您的国家拥有任何形式的虚构文字内容的法律后果,请不要下载。
- 本项目内的所有数据及基于这些数据的衍生作品禁止用作商业性目的。 我不拥有 'scenario-raw' 里的 krkr2 脚本源文件,而其余的数据处理方法按照 CC BY-NC 4.0 协议开放。
- 按照数据预处理的先后次序,依次是:'scenario-raw' 里是 krkr2 脚本源文件,'scenario' 里是清理后的结构化脚本,'conversation' 里是我主观分段制作的对话格式数据。
- 对于主观分段,一部分是手动的,其余是基于文本相似度的不太靠谱自动分段(我还没推的那部分,我不想被剧透啊啊啊)。手动分段道且阻且长,慢慢做吧,进度记录在 manual_seg-URL。
- 2015-2017 的前四作是单女主,后面的作品都是双女主的,脚本格式也略微不同。
- 压缩包已加密,解压密码是 yorunohitsuji
## 感谢数据源背后的汉化组们
- 与小萝莉相思相爱:脸肿汉化组
- 勾指婚约洛丽塔:守夜人汉化组&7/9工作室
- 与小萝莉相思相爱的生活:脸肿汉化组
- 同居恋人洛丽塔:守夜人汉化组
- 双子洛丽塔后宫:靴下汉化组x仓库汉化组
- 爱欲姐妹洛丽塔:守夜人汉化组
- 诱惑自大洛丽塔:守夜人汉化组
- 每日亲吻洛丽塔:比喵个人汉化
## 给我自己看的预处理流程
0. 各作的脚本提取出来放在 'scenario-raw/' 里,用 'script/URL' 转成 UTF-8,'2015-sssa' 额外需要 'script/URL' 转成 LF
1. 修复格式小问题 'cd scenario-raw && bash URL'
2. 运行 'python URL' 得到 'scenario/'
3. 分段,再转成 'conversation/'
a. 自动分段:'python -m URL path/to/URL'
b. 手动分段后,'python -m URL path/to/scenario-manual_seg.jsonl'
添加新卷:
0. 脚本放在 'scenario-raw/' 里
1. 在 'URL' 里添加新卷的元数据
| [
"## ️注意\n\n- 请注意,数据来自 R18 的视觉小说,并且包含可能被认为是不适当、令人震惊、令人不安、令人反感和极端的主题。如果您不确定在您的国家拥有任何形式的虚构文字内容的法律后果,请不要下载。\n- 本项目内的所有数据及基于这些数据的衍生作品禁止用作商业性目的。 我不拥有 'scenario-raw' 里的 krkr2 脚本源文件,而其余的数据处理方法按照 CC BY-NC 4.0 协议开放。\n- 按照数据预处理的先后次序,依次是:'scenario-raw' 里是 krkr2 脚本源文件,'scenario' 里是清理后的结构化脚本,'conversation' 里是我主观分段制作的对话格式数据。\n- 对于主观分段,一部分是手动的,其余是基于文本相似度的不太靠谱自动分段(我还没推的那部分,我不想被剧透啊啊啊)。手动分段道且阻且长,慢慢做吧,进度记录在 manual_seg-URL。\n- 2015-2017 的前四作是单女主,后面的作品都是双女主的,脚本格式也略微不同。\n- 压缩包已加密,解压密码是 yorunohitsuji",
"## 感谢数据源背后的汉化组们\n\n- 与小萝莉相思相爱:脸肿汉化组\n- 勾指婚约洛丽塔:守夜人汉化组&7/9工作室\n- 与小萝莉相思相爱的生活:脸肿汉化组\n- 同居恋人洛丽塔:守夜人汉化组\n- 双子洛丽塔后宫:靴下汉化组x仓库汉化组\n- 爱欲姐妹洛丽塔:守夜人汉化组\n- 诱惑自大洛丽塔:守夜人汉化组\n- 每日亲吻洛丽塔:比喵个人汉化",
"## 给我自己看的预处理流程\n\n0. 各作的脚本提取出来放在 'scenario-raw/' 里,用 'script/URL' 转成 UTF-8,'2015-sssa' 额外需要 'script/URL' 转成 LF\n1. 修复格式小问题 'cd scenario-raw && bash URL'\n2. 运行 'python URL' 得到 'scenario/'\n3. 分段,再转成 'conversation/'\n a. 自动分段:'python -m URL path/to/URL'\n b. 手动分段后,'python -m URL path/to/scenario-manual_seg.jsonl'\n\n添加新卷:\n\n0. 脚本放在 'scenario-raw/' 里\n1. 在 'URL' 里添加新卷的元数据"
] | [
"TAGS\n#task_categories-conversational #task_categories-text-generation #annotations_creators-expert-generated #annotations_creators-machine-generated #language-Chinese #license-cc-by-nc-4.0 #not-for-all-audiences #region-us \n",
"## ️注意\n\n- 请注意,数据来自 R18 的视觉小说,并且包含可能被认为是不适当、令人震惊、令人不安、令人反感和极端的主题。如果您不确定在您的国家拥有任何形式的虚构文字内容的法律后果,请不要下载。\n- 本项目内的所有数据及基于这些数据的衍生作品禁止用作商业性目的。 我不拥有 'scenario-raw' 里的 krkr2 脚本源文件,而其余的数据处理方法按照 CC BY-NC 4.0 协议开放。\n- 按照数据预处理的先后次序,依次是:'scenario-raw' 里是 krkr2 脚本源文件,'scenario' 里是清理后的结构化脚本,'conversation' 里是我主观分段制作的对话格式数据。\n- 对于主观分段,一部分是手动的,其余是基于文本相似度的不太靠谱自动分段(我还没推的那部分,我不想被剧透啊啊啊)。手动分段道且阻且长,慢慢做吧,进度记录在 manual_seg-URL。\n- 2015-2017 的前四作是单女主,后面的作品都是双女主的,脚本格式也略微不同。\n- 压缩包已加密,解压密码是 yorunohitsuji",
"## 感谢数据源背后的汉化组们\n\n- 与小萝莉相思相爱:脸肿汉化组\n- 勾指婚约洛丽塔:守夜人汉化组&7/9工作室\n- 与小萝莉相思相爱的生活:脸肿汉化组\n- 同居恋人洛丽塔:守夜人汉化组\n- 双子洛丽塔后宫:靴下汉化组x仓库汉化组\n- 爱欲姐妹洛丽塔:守夜人汉化组\n- 诱惑自大洛丽塔:守夜人汉化组\n- 每日亲吻洛丽塔:比喵个人汉化",
"## 给我自己看的预处理流程\n\n0. 各作的脚本提取出来放在 'scenario-raw/' 里,用 'script/URL' 转成 UTF-8,'2015-sssa' 额外需要 'script/URL' 转成 LF\n1. 修复格式小问题 'cd scenario-raw && bash URL'\n2. 运行 'python URL' 得到 'scenario/'\n3. 分段,再转成 'conversation/'\n a. 自动分段:'python -m URL path/to/URL'\n b. 手动分段后,'python -m URL path/to/scenario-manual_seg.jsonl'\n\n添加新卷:\n\n0. 脚本放在 'scenario-raw/' 里\n1. 在 'URL' 里添加新卷的元数据"
] |
33727e1862268ca91cb576b545ce3b8b3ecabcad | # Dataset Card for "medical_instruction_format"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jtatman/medical_instruction_format | [
"region:us"
] | 2024-02-01T07:59:05+00:00 | {"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "system", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 115532598, "num_examples": 47122}], "download_size": 53812365, "dataset_size": 115532598}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2024-02-01T07:59:36+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "medical_instruction_format"
More Information needed | [
"# Dataset Card for \"medical_instruction_format\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"medical_instruction_format\"\n\nMore Information needed"
] |
8e3554e40cf75bc0ee63af275f6b0d14402336be | # Celebrity 1000
Top 1000 celebrities. 18,184 images. 256x256. Square cropped to face. | ares1123/celebrity_dataset | [
"region:us"
] | 2024-02-01T08:09:21+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Aaron Eckhart", "1": "Aaron Paul", "2": "Aaron Rodgers", "3": "Aaron Taylor-Johnson", "4": "Abbi Jacobson", "5": "Abhishek Bachchan", "6": "Abigail Breslin", "7": "Abigail Spencer", "8": "Adam Brody", "9": "Adam Devine", "10": "Adam Driver", "11": "Adam Lambert", "12": "Adam Levine", "13": "Adam Sandler", "14": "Adam Scott", "15": "Adele", "16": "Adrian Grenier", "17": "Ad\u00e8le Exarchopoulos", "18": "Aidan Gillen", "19": "Aidan Turner", "20": "Aishwarya Rai", "21": "Aja Naomi King", "22": "Alden Ehrenreich", "23": "Aldis Hodge", "24": "Alec Baldwin", "25": "Alex Morgan", "26": "Alex Pettyfer", "27": "Alex Rodriguez", "28": "Alexander Skarsg\u00e5rd", "29": "Alexandra Daddario", "30": "Alfre Woodard", "31": "Alia Shawkat", "32": "Alice Braga", "33": "Alice Eve", "34": "Alicia Keys", "35": "Alicia Vikander", "36": "Alison Brie", "37": "Allison Janney", "38": "Allison Williams", "39": "Alyson Hannigan", "40": "Amanda Peet", "41": "Amanda Seyfried", "42": "Amandla Stenberg", "43": "Amber Heard", "44": "America Ferrera", "45": "Amy Adams", "46": "Amy Poehler", "47": "Amy Schumer", "48": "Ana de Armas", "49": "Andie MacDowell", "50": "Andrew Garfield", "51": "Andrew Lincoln", "52": "Andrew Scott", "53": "Andy Garcia", "54": "Andy Samberg", "55": "Andy Serkis", "56": "Angela Bassett", "57": "Angelina Jolie", "58": "Anna Camp", "59": "Anna Faris", "60": "Anna Kendrick", "61": "Anna Paquin", "62": "AnnaSophia Robb", "63": "Annabelle Wallis", "64": "Anne Hathaway", "65": "Anne Marie", "66": "Anne-Marie", "67": "Ansel Elgort", "68": "Anson Mount", "69": "Anthony Hopkins", "70": "Anthony Joshua", "71": "Anthony Mackie", "72": "Antonio Banderas", "73": "Anya Taylor-Joy", "74": "Ariana Grande", "75": "Armie Hammer", "76": "Ashley Judd", "77": "Ashton Kutcher", "78": "Aubrey Plaza", "79": "Auli'i Cravalho", "80": "Awkwafina", "81": "Barack Obama", "82": "Bella Hadid", "83": "Bella Thorne", "84": "Ben Barnes", "85": "Ben Mendelsohn", "86": "Ben Stiller", "87": "Ben Whishaw", "88": "Benedict Cumberbatch", "89": "Benedict Wong", "90": "Benicio del Toro", "91": "Bill Gates", "92": "Bill Hader", "93": "Bill Murray", "94": "Bill Pullman", "95": "Bill Skarsg\u00e5rd", "96": "Billie Eilish", "97": "Billie Lourd", "98": "Billy Crudup", "99": "Billy Porter", "100": "Blake Lively", "101": "Bob Odenkirk", "102": "Bonnie Wright", "103": "Boyd Holbrook", "104": "Brad Pitt", "105": "Bradley Cooper", "106": "Brendan Fraser", "107": "Brian Cox", "108": "Brie Larson", "109": "Brittany Snow", "110": "Bryan Cranston", "111": "Bryce Dallas Howard", "112": "Busy Philipps", "113": "Caitriona Balfe", "114": "Cameron Diaz", "115": "Camila Cabello", "116": "Camila Mendes", "117": "Cardi B", "118": "Carey Mulligan", "119": "Carla Gugino", "120": "Carrie Underwood", "121": "Casey Affleck", "122": "Cate Blanchett", "123": "Catherine Keener", "124": "Catherine Zeta-Jones", "125": "Celine Dion", "126": "Chace Crawford", "127": "Chadwick Boseman", "128": "Channing Tatum", "129": "Charlie Cox", "130": "Charlie Day", "131": "Charlie Hunnam", "132": "Charlie Plummer", "133": "Charlize Theron", "134": "Chiara Ferragni", "135": "Chiwetel Ejiofor", "136": "Chloe Bennet", "137": "Chloe Grace Moretz", "138": "Chloe Sevigny", "139": "Chlo\u00eb Grace Moretz", "140": "Chlo\u00eb Sevigny", "141": "Chris Cooper", "142": "Chris Evans", "143": "Chris Hemsworth", "144": "Chris Martin", 
"145": "Chris Messina", "146": "Chris Noth", "147": "Chris O'Dowd", "148": "Chris Pine", "149": "Chris Pratt", "150": "Chris Tucker", "151": "Chrissy Teigen", "152": "Christian Bale", "153": "Christian Slater", "154": "Christina Aguilera", "155": "Christina Applegate", "156": "Christina Hendricks", "157": "Christina Milian", "158": "Christina Ricci", "159": "Christine Baranski", "160": "Christoph Waltz", "161": "Christopher Plummer", "162": "Christopher Walken", "163": "Cillian Murphy", "164": "Claire Foy", "165": "Clive Owen", "166": "Clive Standen", "167": "Cobie Smulders", "168": "Colin Farrell", "169": "Colin Firth", "170": "Colin Hanks", "171": "Connie Britton", "172": "Conor McGregor", "173": "Constance Wu", "174": "Constance Zimmer", "175": "Courteney Cox", "176": "Cristiano Ronaldo", "177": "Daisy Ridley", "178": "Dak Prescott", "179": "Dakota Fanning", "180": "Dakota Johnson", "181": "Damian Lewis", "182": "Dan Stevens", "183": "Danai Gurira", "184": "Dane DeHaan", "185": "Daniel Craig", "186": "Daniel Dae Kim", "187": "Daniel Day-Lewis", "188": "Daniel Gillies", "189": "Daniel Kaluuya", "190": "Daniel Mays", "191": "Daniel Radcliffe", "192": "Danny DeVito", "193": "Darren Criss", "194": "Dave Bautista", "195": "Dave Franco", "196": "Dave Grohl", "197": "Daveed Diggs", "198": "David Attenborough", "199": "David Beckham", "200": "David Duchovny", "201": "David Harbour", "202": "David Oyelowo", "203": "David Schwimmer", "204": "David Tennant", "205": "David Thewlis", "206": "Dax Shepard", "207": "Debra Messing", "208": "Demi Lovato", "209": "Dennis Quaid", "210": "Denzel Washington", "211": "Dermot Mulroney", "212": "Dev Patel", "213": "Diane Keaton", "214": "Diane Kruger", "215": "Diane Lane", "216": "Diego Boneta", "217": "Diego Luna", "218": "Djimon Hounsou", "219": "Dolly Parton", "220": "Domhnall Gleeson", "221": "Dominic Cooper", "222": "Dominic Monaghan", "223": "Dominic West", "224": "Don Cheadle", "225": "Donald Glover", "226": "Donald Sutherland", "227": "Donald Trump", "228": "Dua Lipa", "229": "Dwayne \"The Rock\" Johnson", "230": "Dwayne Johnson", "231": "Dylan O'Brien", "232": "Ed Harris", "233": "Ed Helms", "234": "Ed Sheeran", "235": "Eddie Murphy", "236": "Eddie Redmayne", "237": "Edgar Ramirez", "238": "Edward Norton", "239": "Eiza Gonzalez", "240": "Eiza Gonz\u00e1lez", "241": "Elijah Wood", "242": "Elisabeth Moss", "243": "Elisha Cuthbert", "244": "Eliza Coupe", "245": "Elizabeth Banks", "246": "Elizabeth Debicki", "247": "Elizabeth Lail", "248": "Elizabeth McGovern", "249": "Elizabeth Moss", "250": "Elizabeth Olsen", "251": "Elle Fanning", "252": "Ellen DeGeneres", "253": "Ellen Page", "254": "Ellen Pompeo", "255": "Ellie Goulding", "256": "Elon Musk", "257": "Emile Hirsch", "258": "Emilia Clarke", "259": "Emilia Fox", "260": "Emily Beecham", "261": "Emily Blunt", "262": "Emily Browning", "263": "Emily Deschanel", "264": "Emily Hampshire", "265": "Emily Mortimer", "266": "Emily Ratajkowski", "267": "Emily VanCamp", "268": "Emily Watson", "269": "Emma Bunton", "270": "Emma Chamberlain", "271": "Emma Corrin", "272": "Emma Mackey", "273": "Emma Roberts", "274": "Emma Stone", "275": "Emma Thompson", "276": "Emma Watson", "277": "Emmanuelle Chriqui", "278": "Emmy Rossum", "279": "Eoin Macken", "280": "Eric Bana", "281": "Ethan Hawke", "282": "Eva Green", "283": "Eva Longoria", "284": "Eva Mendes", "285": "Evan Peters", "286": "Evan Rachel Wood", "287": "Evangeline Lilly", "288": "Ewan McGregor", "289": "Ezra Miller", "290": "Felicity Huffman", "291": "Felicity 
Jones", "292": "Finn Wolfhard", "293": "Florence Pugh", "294": "Florence Welch", "295": "Forest Whitaker", "296": "Freddie Highmore", "297": "Freddie Prinze Jr.", "298": "Freema Agyeman", "299": "Freida Pinto", "300": "Freya Allan", "301": "Gabrielle Union", "302": "Gael Garcia Bernal", "303": "Gael Garc\u00eda Bernal", "304": "Gal Gadot", "305": "Garrett Hedlund", "306": "Gary Oldman", "307": "Gemma Arterton", "308": "Gemma Chan", "309": "Gemma Whelan", "310": "George Clooney", "311": "George Lucas", "312": "Gerard Butler", "313": "Giancarlo Esposito", "314": "Giannis Antetokounmpo", "315": "Gigi Hadid", "316": "Gillian Anderson", "317": "Gillian Jacobs", "318": "Gina Carano", "319": "Gina Gershon", "320": "Gina Rodriguez", "321": "Ginnifer Goodwin", "322": "Gisele Bundchen", "323": "Glenn Close", "324": "Grace Kelly", "325": "Greg Kinnear", "326": "Greta Gerwig", "327": "Greta Scacchi", "328": "Greta Thunberg", "329": "Gugu Mbatha-Raw", "330": "Guy Ritchie", "331": "Gwen Stefani", "332": "Gwendoline Christie", "333": "Gwyneth Paltrow", "334": "Hafthor Bjornsson", "335": "Hailee Steinfeld", "336": "Hailey Bieber", "337": "Haley Joel Osment", "338": "Halle Berry", "339": "Hannah Simone", "340": "Harrison Ford", "341": "Harry Styles", "342": "Harvey Weinstein", "343": "Hayden Panettiere", "344": "Hayley Atwell", "345": "Helen Hunt", "346": "Helen Mirren", "347": "Helena Bonham Carter", "348": "Henry Cavill", "349": "Henry Golding", "350": "Hilary Swank", "351": "Himesh Patel", "352": "Hozier", "353": "Hugh Bonneville", "354": "Hugh Dancy", "355": "Hugh Grant", "356": "Hugh Jackman", "357": "Hugh Laurie", "358": "Ian Somerhalder", "359": "Idris Elba", "360": "Imelda Staunton", "361": "Imogen Poots", "362": "Ioan Gruffudd", "363": "Isabella Rossellini", "364": "Isabelle Huppert", "365": "Isla Fisher", "366": "Issa Rae", "367": "Iwan Rheon", "368": "J.K. Rowling", "369": "J.K. Simmons", "370": "Jack Black", "371": "Jack Reynor", "372": "Jack Whitehall", "373": "Jackie Chan", "374": "Jada Pinkett Smith", "375": "Jaden Smith", "376": "Jaimie Alexander", "377": "Jake Gyllenhaal", "378": "Jake Johnson", "379": "Jake T. 
Austin", "380": "James Cameron", "381": "James Corden", "382": "James Franco", "383": "James Marsden", "384": "James McAvoy", "385": "James Norton", "386": "Jamie Bell", "387": "Jamie Chung", "388": "Jamie Dornan", "389": "Jamie Foxx", "390": "Jamie Lee Curtis", "391": "Jamie Oliver", "392": "Jane Fonda", "393": "Jane Krakowski", "394": "Jane Levy", "395": "Jane Lynch", "396": "Jane Seymour", "397": "Janelle Mon\u00e1e", "398": "January Jones", "399": "Jared Leto", "400": "Jason Bateman", "401": "Jason Clarke", "402": "Jason Derulo", "403": "Jason Isaacs", "404": "Jason Momoa", "405": "Jason Mraz", "406": "Jason Schwartzman", "407": "Jason Segel", "408": "Jason Statham", "409": "Jason Sudeikis", "410": "Javier Bardem", "411": "Jay Baruchel", "412": "Jay-Z", "413": "Jeff Bezos", "414": "Jeff Bridges", "415": "Jeff Daniels", "416": "Jeff Goldblum", "417": "Jeffrey Dean Morgan", "418": "Jeffrey Donovan", "419": "Jeffrey Wright", "420": "Jemima Kirke", "421": "Jenna Coleman", "422": "Jenna Fischer", "423": "Jenna Ortega", "424": "Jennifer Aniston", "425": "Jennifer Connelly", "426": "Jennifer Coolidge", "427": "Jennifer Esposito", "428": "Jennifer Garner", "429": "Jennifer Hudson", "430": "Jennifer Lawrence", "431": "Jennifer Lopez", "432": "Jennifer Love Hewitt", "433": "Jenny Slate", "434": "Jeremy Irons", "435": "Jeremy Renner", "436": "Jeremy Strong", "437": "Jerry Seinfeld", "438": "Jesse Eisenberg", "439": "Jesse Metcalfe", "440": "Jesse Plemons", "441": "Jesse Tyler Ferguson", "442": "Jesse Williams", "443": "Jessica Alba", "444": "Jessica Biel", "445": "Jessica Chastain", "446": "Jessica Lange", "447": "Jessie Buckley", "448": "Jim Carrey", "449": "Jim Parsons", "450": "Joan Collins", "451": "Joan Cusack", "452": "Joanne Froggatt", "453": "Joaquin Phoenix", "454": "Jodie Comer", "455": "Jodie Foster", "456": "Joe Jonas", "457": "Joe Keery", "458": "Joel Edgerton", "459": "Joel Kinnaman", "460": "Joel McHale", "461": "John Boyega", "462": "John C. 
Reilly", "463": "John Cena", "464": "John Cho", "465": "John Cleese", "466": "John Corbett", "467": "John David Washington", "468": "John Goodman", "469": "John Hawkes", "470": "John Krasinski", "471": "John Legend", "472": "John Leguizamo", "473": "John Lithgow", "474": "John Malkovich", "475": "John Mayer", "476": "John Mulaney", "477": "John Oliver", "478": "John Slattery", "479": "John Travolta", "480": "John Turturro", "481": "Johnny Depp", "482": "Johnny Knoxville", "483": "Jon Bernthal", "484": "Jon Favreau", "485": "Jon Hamm", "486": "Jonah Hill", "487": "Jonathan Groff", "488": "Jonathan Majors", "489": "Jonathan Pryce", "490": "Jonathan Rhys Meyers", "491": "Jordan Peele", "492": "Jordana Brewster", "493": "Joseph Fiennes", "494": "Joseph Gordon-Levitt", "495": "Josh Allen", "496": "Josh Brolin", "497": "Josh Gad", "498": "Josh Hartnett", "499": "Josh Hutcherson", "500": "Josh Radnor", "501": "Jude Law", "502": "Judy Dench", "503": "Judy Greer", "504": "Julia Garner", "505": "Julia Louis-Dreyfus", "506": "Julia Roberts", "507": "Julia Stiles", "508": "Julian Casablancas", "509": "Julian McMahon", "510": "Julianna Margulies", "511": "Julianne Hough", "512": "Julianne Moore", "513": "Julianne Nicholson", "514": "Juliette Binoche", "515": "Juliette Lewis", "516": "Juno Temple", "517": "Jurnee Smollett", "518": "Justin Bartha", "519": "Justin Bieber", "520": "Justin Hartley", "521": "Justin Herbert", "522": "Justin Long", "523": "Justin Theroux", "524": "Justin Timberlake", "525": "KJ Apa", "526": "Kaitlyn Dever", "527": "Kaley Cuoco", "528": "Kanye West", "529": "Karl Urban", "530": "Kat Dennings", "531": "Kate Beckinsale", "532": "Kate Bosworth", "533": "Kate Hudson", "534": "Kate Mara", "535": "Kate Middleton", "536": "Kate Upton", "537": "Kate Walsh", "538": "Kate Winslet", "539": "Katee Sackhoff", "540": "Katherine Heigl", "541": "Katherine Langford", "542": "Katherine Waterston", "543": "Kathryn Hahn", "544": "Katie Holmes", "545": "Katie McGrath", "546": "Katy Perry", "547": "Kaya Scodelario", "548": "Keanu Reeves", "549": "Keegan-Michael Key", "550": "Keira Knightley", "551": "Keke Palmer", "552": "Kelly Clarkson", "553": "Kelly Macdonald", "554": "Kelly Marie Tran", "555": "Kelly Reilly", "556": "Kelly Ripa", "557": "Kelvin Harrison Jr.", "558": "Keri Russell", "559": "Kerry Washington", "560": "Kevin Bacon", "561": "Kevin Costner", "562": "Kevin Hart", "563": "Kevin Spacey", "564": "Ki Hong Lee", "565": "Kiefer Sutherland", "566": "Kieran Culkin", "567": "Kiernan Shipka", "568": "Kim Dickens", "569": "Kim Kardashian", "570": "Kirsten Dunst", "571": "Kit Harington", "572": "Kourtney Kardashian", "573": "Kristen Bell", "574": "Kristen Stewart", "575": "Kristen Wiig", "576": "Kristin Davis", "577": "Krysten Ritter", "578": "Kyle Chandler", "579": "Kylie Jenner", "580": "Kylie Minogue", "581": "Lady Gaga", "582": "Lake Bell", "583": "Lakeith Stanfield", "584": "Lamar Jackson", "585": "Lana Del Rey", "586": "Laura Dern", "587": "Laura Harrier", "588": "Laura Linney", "589": "Laura Prepon", "590": "Laurence Fishburne", "591": "Laverne Cox", "592": "LeBron James", "593": "Lea Michele", "594": "Lea Seydoux", "595": "Lee Pace", "596": "Leighton Meester", "597": "Lena Headey", "598": "Leonardo Da Vinci", "599": "Leonardo DiCaprio", "600": "Leslie Mann", "601": "Leslie Odom Jr.", "602": "Lewis Hamilton", "603": "Liam Hemsworth", "604": "Liam Neeson", "605": "Lili Reinhart", "606": "Lily Aldridge", "607": "Lily Allen", "608": "Lily Collins", "609": "Lily James", "610": "Lily Rabe", 
"611": "Lily Tomlin", "612": "Lin-Manuel Miranda", "613": "Linda Cardellini", "614": "Lionel Messi", "615": "Lisa Bonet", "616": "Lisa Kudrow", "617": "Liv Tyler", "618": "Lizzo", "619": "Logan Lerman", "620": "Lorde", "621": "Lucy Boynton", "622": "Lucy Hale", "623": "Lucy Lawless", "624": "Lucy Liu", "625": "Luke Evans", "626": "Luke Perry", "627": "Luke Wilson", "628": "Lupita Nyong'o", "629": "L\u00e9a Seydoux", "630": "Mackenzie Davis", "631": "Madelaine Petsch", "632": "Mads Mikkelsen", "633": "Mae Whitman", "634": "Maggie Gyllenhaal", "635": "Maggie Q", "636": "Maggie Siff", "637": "Maggie Smith", "638": "Mahershala Ali", "639": "Mahira Khan", "640": "Maisie Richardson-Sellers", "641": "Maisie Williams", "642": "Mandy Moore", "643": "Mandy Patinkin", "644": "Marc Anthony", "645": "Margaret Qualley", "646": "Margot Robbie", "647": "Maria Sharapova", "648": "Marion Cotillard", "649": "Marisa Tomei", "650": "Mariska Hargitay", "651": "Mark Hamill", "652": "Mark Ruffalo", "653": "Mark Strong", "654": "Mark Wahlberg", "655": "Mark Zuckerberg", "656": "Marlon Brando", "657": "Martin Freeman", "658": "Martin Scorsese", "659": "Mary Elizabeth Winstead", "660": "Mary J. Blige", "661": "Mary Steenburgen", "662": "Mary-Louise Parker", "663": "Matt Bomer", "664": "Matt Damon", "665": "Matt LeBlanc", "666": "Matt Smith", "667": "Matthew Fox", "668": "Matthew Goode", "669": "Matthew Macfadyen", "670": "Matthew McConaughey", "671": "Matthew Perry", "672": "Matthew Rhys", "673": "Matthew Stafford", "674": "Max Minghella", "675": "Maya Angelou", "676": "Maya Hawke", "677": "Maya Rudolph", "678": "Megan Fox", "679": "Megan Rapinoe", "680": "Meghan Markle", "681": "Mel Gibson", "682": "Melanie Lynskey", "683": "Melissa Benoist", "684": "Melissa McCarthy", "685": "Melonie Diaz", "686": "Meryl Streep", "687": "Mia Wasikowska", "688": "Michael B. Jordan", "689": "Michael C. 
Hall", "690": "Michael Caine", "691": "Michael Cera", "692": "Michael Cudlitz", "693": "Michael Douglas", "694": "Michael Ealy", "695": "Michael Fassbender", "696": "Michael Jordan", "697": "Michael Keaton", "698": "Michael Pena", "699": "Michael Pe\u00f1a", "700": "Michael Phelps", "701": "Michael Shannon", "702": "Michael Sheen", "703": "Michael Stuhlbarg", "704": "Michelle Dockery", "705": "Michelle Monaghan", "706": "Michelle Obama", "707": "Michelle Pfeiffer", "708": "Michelle Rodriguez", "709": "Michelle Williams", "710": "Michelle Yeoh", "711": "Michiel Huisman", "712": "Mila Kunis", "713": "Miles Teller", "714": "Milla Jovovich", "715": "Millie Bobby Brown", "716": "Milo Ventimiglia", "717": "Mindy Kaling", "718": "Miranda Cosgrove", "719": "Miranda Kerr", "720": "Mireille Enos", "721": "Molly Ringwald", "722": "Morgan Freeman", "723": "M\u00e9lanie Laurent", "724": "Naomi Campbell", "725": "Naomi Harris", "726": "Naomi Scott", "727": "Naomi Watts", "728": "Naomie Harris", "729": "Nas", "730": "Natalie Dormer", "731": "Natalie Imbruglia", "732": "Natalie Morales", "733": "Natalie Portman", "734": "Nathalie Emmanuel", "735": "Nathalie Portman", "736": "Nathan Fillion", "737": "Naya Rivera", "738": "Neil Patrick Harris", "739": "Neil deGrasse Tyson", "740": "Neve Campbell", "741": "Neymar Jr.", "742": "Nicholas Braun", "743": "Nicholas Hoult", "744": "Nick Jonas", "745": "Nick Kroll", "746": "Nick Offerman", "747": "Nick Robinson", "748": "Nicole Kidman", "749": "Nikolaj Coster-Waldau", "750": "Nina Dobrev", "751": "Noah Centineo", "752": "Noomi Rapace", "753": "Norman Reedus", "754": "Novak Djokovic", "755": "Octavia Spencer", "756": "Odessa Young", "757": "Odette Annable", "758": "Olivia Colman", "759": "Olivia Cooke", "760": "Olivia Holt", "761": "Olivia Munn", "762": "Olivia Wilde", "763": "Oprah Winfrey", "764": "Orlando Bloom", "765": "Oscar Isaac", "766": "Owen Wilson", "767": "Pablo Picasso", "768": "Patrick Dempsey", "769": "Patrick Mahomes", "770": "Patrick Stewart", "771": "Patrick Wilson", "772": "Paul Bettany", "773": "Paul Dano", "774": "Paul Giamatti", "775": "Paul McCartney", "776": "Paul Rudd", "777": "Paul Wesley", "778": "Paula Patton", "779": "Pedro Almod\u00f3var", "780": "Pedro Pascal", "781": "Penelope Cruz", "782": "Pen\u00e9lope Cruz", "783": "Pete Davidson", "784": "Peter Dinklage", "785": "Phoebe Dynevor", "786": "Phoebe Waller-Bridge", "787": "Pierce Brosnan", "788": "Portia de Rossi", "789": "Priyanka Chopra", "790": "Quentin Tarantino", "791": "Rachel Bilson", "792": "Rachel Brosnahan", "793": "Rachel McAdams", "794": "Rachel Weisz", "795": "Rafe Spall", "796": "Rainn Wilson", "797": "Ralph Fiennes", "798": "Rami Malek", "799": "Rashida Jones", "800": "Ray Liotta", "801": "Ray Romano", "802": "Rebecca Ferguson", "803": "Rebecca Hall", "804": "Reese Witherspoon", "805": "Regina Hall", "806": "Regina King", "807": "Renee Zellweger", "808": "Ren\u00e9e Zellweger", "809": "Rhys Ifans", "810": "Ricardo Montalban", "811": "Richard Armitage", "812": "Richard Gere", "813": "Richard Jenkins", "814": "Richard Madden", "815": "Ricky Gervais", "816": "Ricky Martin", "817": "Rihanna", "818": "Riley Keough", "819": "Rita Ora", "820": "River Phoenix", "821": "Riz Ahmed", "822": "Rob Lowe", "823": "Robert Carlyle", "824": "Robert De Niro", "825": "Robert Downey Jr.", "826": "Robert Pattinson", "827": "Robert Sheehan", "828": "Robin Tunney", "829": "Robin Williams", "830": "Roger Federer", "831": "Rooney Mara", "832": "Rosamund Pike", "833": "Rosario Dawson", "834": 
"Rose Byrne", "835": "Rose Leslie", "836": "Roselyn Sanchez", "837": "Ruby Rose", "838": "Rupert Grint", "839": "Russell Brand", "840": "Russell Crowe", "841": "Russell Wilson", "842": "Ruth Bader Ginsburg", "843": "Ruth Wilson", "844": "Ryan Eggold", "845": "Ryan Gosling", "846": "Ryan Murphy", "847": "Ryan Phillippe", "848": "Ryan Reynolds", "849": "Ryan Seacrest", "850": "Salma Hayek", "851": "Sam Claflin", "852": "Sam Heughan", "853": "Sam Rockwell", "854": "Sam Smith", "855": "Samara Weaving", "856": "Samuel L. Jackson", "857": "Sandra Bullock", "858": "Sandra Oh", "859": "Saoirse Ronan", "860": "Sarah Gadon", "861": "Sarah Hyland", "862": "Sarah Jessica Parker", "863": "Sarah Michelle Gellar", "864": "Sarah Paulson", "865": "Sarah Silverman", "866": "Sarah Wayne Callies", "867": "Sasha Alexander", "868": "Scarlett Johansson", "869": "Scott Speedman", "870": "Sean Bean", "871": "Sebastian Stan", "872": "Selena Gomez", "873": "Selma Blair", "874": "Serena Williams", "875": "Seth MacFarlane", "876": "Seth Meyers", "877": "Seth Rogen", "878": "Shailene Woodley", "879": "Shakira", "880": "Shania Twain", "881": "Sharlto Copley", "882": "Shawn Mendes", "883": "Shia LaBeouf", "884": "Shiri Appleby", "885": "Shohreh Aghdashloo", "886": "Shonda Rhimes", "887": "Sienna Miller", "888": "Sigourney Weaver", "889": "Simon Baker", "890": "Simon Cowell", "891": "Simon Pegg", "892": "Simone Biles", "893": "Sofia Boutella", "894": "Sofia Vergara", "895": "Sophie Turner", "896": "Sophie Wessex", "897": "Stanley Tucci", "898": "Stephen Amell", "899": "Stephen Colbert", "900": "Stephen Curry", "901": "Stephen Dorff", "902": "Sterling K. Brown", "903": "Sterling Knight", "904": "Steve Carell", "905": "Steven Yeun", "906": "Susan Sarandon", "907": "Taika Waititi", "908": "Taraji P. 
Henson", "909": "Taron Egerton", "910": "Taylor Hill", "911": "Taylor Kitsch", "912": "Taylor Lautner", "913": "Taylor Schilling", "914": "Taylor Swift", "915": "Teresa Palmer", "916": "Terrence Howard", "917": "Tessa Thompson", "918": "Thandie Newton", "919": "The Weeknd", "920": "Theo James", "921": "Thomas Brodie-Sangster", "922": "Thomas Jane", "923": "Tiger Woods", "924": "Tilda Swinton", "925": "Tim Burton", "926": "Tim Cook", "927": "Timothee Chalamet", "928": "Timothy Olyphant", "929": "Timothy Spall", "930": "Timoth\u00e9e Chalamet", "931": "Tina Fey", "932": "Tobey Maguire", "933": "Toby Jones", "934": "Toby Kebbell", "935": "Toby Regbo", "936": "Tom Brady", "937": "Tom Brokaw", "938": "Tom Cavanagh", "939": "Tom Cruise", "940": "Tom Ellis", "941": "Tom Felton", "942": "Tom Hanks", "943": "Tom Hardy", "944": "Tom Hiddleston", "945": "Tom Holland", "946": "Tom Hollander", "947": "Tom Hopper", "948": "Tom Selleck", "949": "Toni Collette", "950": "Tony Hale", "951": "Topher Grace", "952": "Tracee Ellis Ross", "953": "Tyra Banks", "954": "Tyrese Gibson", "955": "Uma Thurman", "956": "Usain Bolt", "957": "Uzo Aduba", "958": "Vanessa Hudgens", "959": "Vanessa Kirby", "960": "Vera Farmiga", "961": "Victoria Pedretti", "962": "Viggo Mortensen", "963": "Vin Diesel", "964": "Vince Vaughn", "965": "Vincent Cassel", "966": "Vincent D'Onofrio", "967": "Vincent Kartheiser", "968": "Viola Davis", "969": "Walton Goggins", "970": "Wes Anderson", "971": "Wes Bentley", "972": "Whoopi Goldberg", "973": "Will Ferrell", "974": "Will Poulter", "975": "Willem Dafoe", "976": "William Jackson Harper", "977": "William Shatner", "978": "Winona Ryder", "979": "Woody Harrelson", "980": "Yara Shahidi", "981": "Yvonne Strahovski", "982": "Zac Efron", "983": "Zach Braff", "984": "Zach Galifianakis", "985": "Zachary Levi", "986": "Zachary Quinto", "987": "Zayn Malik", "988": "Zazie Beetz", "989": "Zendaya", "990": "Zoe Kazan", "991": "Zoe Kravitz", "992": "Zoe Saldana", "993": "Zoey Deutch", "994": "Zooey Deschanel", "995": "Zo\u00eb Kravitz", "996": "Zo\u00eb Saldana"}}}}], "splits": [{"name": "train", "num_bytes": 193671657.464, "num_examples": 18184}], "download_size": 190510261, "dataset_size": 193671657.464}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2024-02-01T08:20:41+00:00 | [] | [] | TAGS
#region-us
| # Celebrity 1000
Top 1000 celebrities. 18,184 images. 256x256. Square cropped to face. | [
"# Celebrity 1000\n\nTop 1000 celebrities. 18,184 images. 256x256. Square cropped to face."
] | [
"TAGS\n#region-us \n",
"# Celebrity 1000\n\nTop 1000 celebrities. 18,184 images. 256x256. Square cropped to face."
] |
76b6303308cc052de373ed17c05b8120efbd1abb | # Dataset Card for "6M_Alyaum"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | MosenA/6M_Alyaum | [
"region:us"
] | 2024-02-01T08:23:04+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "url", "dtype": "string"}, {"name": "body", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1105641612, "num_examples": 647333}], "download_size": 489114658, "dataset_size": 1105641612}} | 2024-02-01T08:23:47+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "6M_Alyaum"
More Information needed | [
"# Dataset Card for \"6M_Alyaum\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"6M_Alyaum\"\n\nMore Information needed"
] |
f6750efb75ec714748f8d97495bde2655f4a1366 |
# RRSIS: Referring Remote Sensing Image Segmentation
The RefSegRS dataset that is used in [RRSIS: Referring Remote Sensing Image Segmentation](https://arxiv.org/abs/2306.08625).
Please kindly cite our paper if you find our dataset useful.
~~~
@article{yuan2023rrsis,
title={RRSIS: Referring Remote Sensing Image Segmentation},
author={Yuan, Zhenghang and Mou, Lichao and Hua, Yuansheng and Zhu, Xiao Xiang},
journal={arXiv preprint arXiv:2306.08625},
year={2023}
}
~~~
| JessicaYuan/RefSegRS | [
"license:cc-by-4.0",
"arxiv:2306.08625",
"region:us"
] | 2024-02-01T08:28:36+00:00 | {"license": "cc-by-4.0"} | 2024-02-01T09:47:36+00:00 | [
"2306.08625"
] | [] | TAGS
#license-cc-by-4.0 #arxiv-2306.08625 #region-us
|
# RRSIS: Referring Remote Sensing Image Segmentation
The RefSegRS dataset that is used in RRSIS: Referring Remote Sensing Image Segmentation.
Please kindly cite our paper if you find our dataset useful.
~~~
@article{yuan2023rrsis,
title={RRSIS: Referring Remote Sensing Image Segmentation},
author={Yuan, Zhenghang and Mou, Lichao and Hua, Yuansheng and Zhu, Xiao Xiang},
journal={arXiv preprint arXiv:2306.08625},
year={2023}
}
~~~
| [
"# RRSIS: Referring Remote Sensing Image Segmentation\n\n\nThe RefSegRS dataset that is used in RRSIS: Referring Remote Sensing Image Segmentation.\n\n\nPlease kindly cite our paper if you find our dataset useful.\n\n~~~\n@article{yuan2023rrsis,\n title={RRSIS: Referring Remote Sensing Image Segmentation},\n author={Yuan, Zhenghang and Mou, Lichao and Hua, Yuansheng and Zhu, Xiao Xiang},\n journal={arXiv preprint arXiv:2306.08625},\n year={2023}\n}\n~~~"
] | [
"TAGS\n#license-cc-by-4.0 #arxiv-2306.08625 #region-us \n",
"# RRSIS: Referring Remote Sensing Image Segmentation\n\n\nThe RefSegRS dataset that is used in RRSIS: Referring Remote Sensing Image Segmentation.\n\n\nPlease kindly cite our paper if you find our dataset useful.\n\n~~~\n@article{yuan2023rrsis,\n title={RRSIS: Referring Remote Sensing Image Segmentation},\n author={Yuan, Zhenghang and Mou, Lichao and Hua, Yuansheng and Zhu, Xiao Xiang},\n journal={arXiv preprint arXiv:2306.08625},\n year={2023}\n}\n~~~"
] |
cad8a6131135b40b1e3d925774c09eebac400c46 | # Dataset Card for "14_24_Alriyadh"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | MosenA/14_24_Alriyadh | [
"region:us"
] | 2024-02-01T08:34:19+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "date", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2165621024, "num_examples": 680084}], "download_size": 1013255278, "dataset_size": 2165621024}} | 2024-02-01T08:35:31+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "14_24_Alriyadh"
More Information needed | [
"# Dataset Card for \"14_24_Alriyadh\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"14_24_Alriyadh\"\n\nMore Information needed"
] |
66f87513d119f3fa5ea5e5f23fc5d2a092f5d323 | # SentiMP-Sp Dataset
The SentiMP-Sp Dataset is a Spanish sentiment analysis dataset based on tweets written by members of parliament in Spain in 2021. It has been developed collaboratively by the [Andalusian Research Institute in Data Science and Computational Intelligence (DaSCI)](https://dasci.es/) research group from the [University of Granada](https://www.ugr.es/), the [SINAI](https://sinai.ujaen.es/) research group from the [University of Jaén](https://www.ujaen.es/) and the [Cardiff NLP](https://sites.google.com/view/cardiffnlp/) research group from the [University of Cardiff](https://isc.cardiff.ac.uk/).
<div align="center", style="text-align:center; display:block">
<img style="float:left; padding-right:10px" src="https://dasci.es/wp-content/uploads/2018/12/DaSCI_logo_vertical.png" alt="DaSCI" width="150"/>
<img style="float:left; padding-right:10px" src="https://www.ujaen.es/gobierno/viccom/sites/gobierno_viccom/files/uploads/inline-images/Marca%20Tradicional.png" alt="UJAEN" width="175"/>
<img style="float:left;" src="https://upload.wikimedia.org/wikipedia/commons/e/ef/Cardiff_University_%28logo%29.svg" alt="Cardiff" width="125"/>
</div>
<div style="clear:both"></div>
## Dataset details
The dataset contains 500 tweets in Spanish. For each tweet we provide the following information (a sketch showing how the gold label is derived follows this list):
* **full_text**: The content of the tweet.
* **fold**: Proposed partition \{0,1,2,3,4\} into 5 folds for 5-fold cross-validation, for the sake of reproducibility.
* **label_i**: Annotator i's label (i in \{1,2,3,4,5\}). It takes values in \{-1,0,1\}.
* **majority_vote**: The result of applying the majority-vote strategy to the annotators' individual labels. When there is a tie we use the label "TIE". It takes values in \{-1,0,1,TIE\}.
* **tie_break**: Used to break ties; it is only filled in when TIE appears in the *majority_vote* column. It takes values in \{-1,0,1\}.
* **gold_label**: The final label, obtained by combining the *majority_vote* and *tie_break* columns. It takes values in \{-1,0,1\}.
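
Below is a minimal sketch of how *gold_label* can be reconstructed from the annotator columns described above. It assumes the data has been loaded into a pandas DataFrame; the file name is hypothetical and should be replaced with the actual data file in this repository.

~~~python
from collections import Counter

import pandas as pd

# Hypothetical file name -- replace with the actual data file in this repo.
df = pd.read_csv("sentimp_sp.csv")

def majority_vote(row):
    """Most frequent label among the five annotators, or 'TIE' on a draw."""
    votes = Counter(row[f"label_{i}"] for i in range(1, 6))
    ranked = votes.most_common()
    if len(ranked) > 1 and ranked[0][1] == ranked[1][1]:
        return "TIE"
    return ranked[0][0]

recomputed = df.apply(majority_vote, axis=1)
# Ties are resolved with the tie_break column; all other rows stand as voted.
gold = recomputed.where(recomputed != "TIE", df["tie_break"])
assert (gold == df["gold_label"]).all()  # sanity check against the provided column
~~~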
## Citation
If you use this dataset, please cite:
## Contact
Nuria Rodríguez Barroso - [email protected]
## Acknowledgements
This work was partly supported by the grants PID2020-119478GB-I00, PID2020-116118GA-I00 and TED2021-130145B-I00 funded by MCIN/AEI/10.13039/501100011033 of the Spanish Government.
Shield: [![CC BY-SA 4.0][cc-by-sa-shield]][cc-by-sa]
This work is licensed under a
[Creative Commons Attribution-ShareAlike 4.0 International License][cc-by-sa].
[![CC BY-SA 4.0][cc-by-sa-image]][cc-by-sa]
[cc-by-sa]: http://creativecommons.org/licenses/by-sa/4.0/
[cc-by-sa-image]: https://licensebuttons.net/l/by-sa/4.0/88x31.png
[cc-by-sa-shield]: https://img.shields.io/badge/License-CC%20BY--SA%204.0-lightgrey.svg | rbnuria/SentiMP-Sp | [
"task_categories:text-classification",
"size_categories:n<1K",
"language:es",
"license:cc-by-sa-4.0",
"code",
"region:us"
] | 2024-02-01T08:47:32+00:00 | {"language": ["es"], "license": "cc-by-sa-4.0", "size_categories": ["n<1K"], "task_categories": ["text-classification"], "tags": ["code"]} | 2024-02-01T08:57:17+00:00 | [] | [
"es"
] | TAGS
#task_categories-text-classification #size_categories-n<1K #language-Spanish #license-cc-by-sa-4.0 #code #region-us
| # SentiMP-Sp Dataset
The SentiMP-Sp Dataset is a Spanish sentiment analysis dataset based on tweets written by members of parliament in Spain in 2021. It has been developed collaboratively by the Andalusian Research Institute in Data Science and Computational Intelligence (DaSCI) research group from the University of Granada, the SINAI research group from the University of Jaén and the Cardiff NLP research group from the University of Cardiff.
<div align="center", style="text-align:center; display:block">
<img style="float:left; padding-right:10px" src="URL alt="DaSCI" width="150"/>
<img style="float:left; padding-right:10px" src="URL alt="UJAEN" width="175"/>
<img style="float:left;" src="URL alt="Cardiff" width="125"/>
</div>
<div style="clear:both"></div>
## Dataset details
The dataset contains 500 tweets in Spanish. For each tweet we provide the following information:
* full_text: The content of the tweet.
* fold: Proposed partition \{0,1,2,3,4\} into 5 folds for 5-fold cross-validation, for the sake of reproducibility.
* label_i: Annotator i's label (i in \{1,2,3,4,5\}). It takes values in \{-1,0,1\}.
* majority_vote: The result of applying the majority-vote strategy to the annotators' individual labels. When there is a tie we use the label "TIE". It takes values in \{-1,0,1,TIE\}.
* tie_break: Used to break ties; it is only filled in when TIE appears in the *majority_vote* column. It takes values in \{-1,0,1\}.
* gold_label: The final label, obtained by combining the *majority_vote* and *tie_break* columns. It takes values in \{-1,0,1\}.
If you use this dataset, please cite:
## Contact
Nuria Rodríguez Barroso - rbnuria@URL
## Acknowledgements
This work was partly supported by the grants PID2020-119478GB-I00, PID2020-116118GA-I00 and TED2021-130145B-I00 funded by MCIN/AEI/10.13039/501100011033 of the Spanish Government.
Shield: [![CC BY-SA 4.0][cc-by-sa-shield]][cc-by-sa]
This work is licensed under a
[Creative Commons Attribution-ShareAlike 4.0 International License][cc-by-sa].
[![CC BY-SA 4.0][cc-by-sa-image]][cc-by-sa]
[cc-by-sa]: URL
[cc-by-sa-image]: URL
[cc-by-sa-shield]: URL | [
"# SentiMP-Sp Dataset\n\nThe SentiMP-Sp Dataset is a spanish sentiment analysis dataset based on tweets written by members of parliament in Spain in 2021. It has been developed collaboratively by the Andalusian Research Institute in Data Science and Computational Intelligence (DaSCI) research group from the University of Granada, the SINAI research group from the University of Jaén and the Cardiff NLP research group from the University of Cardiff.\n\n<div align=\"center\", style=\"text-align:center; display:block\">\n<img style=\"float:left; padding-right:10px\" src=\"URL alt=\"DaSCI\" width=\"150\"/>\n<img style=\"float:left; padding-right:10px\" src=\"URL alt=\"UJAEN\" width=\"175\"/>\n<img style=\"float:left;\" src=\"URL alt=\"Cardiff\" width=\"125\"/>\n</div>\n<div style=\"clear:both\"></div>",
"## Dataset details\n\nThe dataset containst 500 tweets in Spanish. For each tweet we provide the following information:\n* full_text: Which containts the content of the tweet.\n* fold: Proposed partitions \\{0,1,2,3,4\\} in 5 folds for 5 fold cross-validation for the sake of reproducibility.\n* label_i : Annotator's i label (i in \\{1,2,3,4,5\\}). It takes values in \\{-1,0,1\\}.\n* majority_vote: The result after applying the majority vote strategy to the annotators' partial labelling. When there is a tie we use the label \"TIE\". It takes values in \\{-1,0,1,TIE\\}.\n* tie_break: We use this column to break ties in cases where there is a tie. Therefore, it is only completed when TIE appears in the *majority_vote* column. It takes values in \\{-1,0,1\\}.\n* gold_label: It represents the final label. It is a combination between the *majority_vote* abd the *tie_break* columns. It takes values in \\{-1,0,1\\}.\n\n\nIf you use this dataset, please cite:",
"## Contact\nNuria Rodríguez Barroso - rbnuria@URL",
"## Acknowledgements\n\nThis work was partly supported by the grants PID2020-119478GB-I00, PID2020-116118GA-I00 and TED2021-130145B-I00 funded by MCIN/AEI/10.13039/501100011033 of the Spanish Government. \n\nShield: [![CC BY-SA 4.0][cc-by-sa-shield]][cc-by-sa]\n\nThis work is licensed under a\n[Creative Commons Attribution-ShareAlike 4.0 International License][cc-by-sa].\n\n[![CC BY-SA 4.0][cc-by-sa-image]][cc-by-sa]\n\n[cc-by-sa]: URL\n[cc-by-sa-image]: URL\n[cc-by-sa-shield]: URL"
] | [
"TAGS\n#task_categories-text-classification #size_categories-n<1K #language-Spanish #license-cc-by-sa-4.0 #code #region-us \n",
"# SentiMP-Sp Dataset\n\nThe SentiMP-Sp Dataset is a spanish sentiment analysis dataset based on tweets written by members of parliament in Spain in 2021. It has been developed collaboratively by the Andalusian Research Institute in Data Science and Computational Intelligence (DaSCI) research group from the University of Granada, the SINAI research group from the University of Jaén and the Cardiff NLP research group from the University of Cardiff.\n\n<div align=\"center\", style=\"text-align:center; display:block\">\n<img style=\"float:left; padding-right:10px\" src=\"URL alt=\"DaSCI\" width=\"150\"/>\n<img style=\"float:left; padding-right:10px\" src=\"URL alt=\"UJAEN\" width=\"175\"/>\n<img style=\"float:left;\" src=\"URL alt=\"Cardiff\" width=\"125\"/>\n</div>\n<div style=\"clear:both\"></div>",
"## Dataset details\n\nThe dataset containst 500 tweets in Spanish. For each tweet we provide the following information:\n* full_text: Which containts the content of the tweet.\n* fold: Proposed partitions \\{0,1,2,3,4\\} in 5 folds for 5 fold cross-validation for the sake of reproducibility.\n* label_i : Annotator's i label (i in \\{1,2,3,4,5\\}). It takes values in \\{-1,0,1\\}.\n* majority_vote: The result after applying the majority vote strategy to the annotators' partial labelling. When there is a tie we use the label \"TIE\". It takes values in \\{-1,0,1,TIE\\}.\n* tie_break: We use this column to break ties in cases where there is a tie. Therefore, it is only completed when TIE appears in the *majority_vote* column. It takes values in \\{-1,0,1\\}.\n* gold_label: It represents the final label. It is a combination between the *majority_vote* abd the *tie_break* columns. It takes values in \\{-1,0,1\\}.\n\n\nIf you use this dataset, please cite:",
"## Contact\nNuria Rodríguez Barroso - rbnuria@URL",
"## Acknowledgements\n\nThis work was partly supported by the grants PID2020-119478GB-I00, PID2020-116118GA-I00 and TED2021-130145B-I00 funded by MCIN/AEI/10.13039/501100011033 of the Spanish Government. \n\nShield: [![CC BY-SA 4.0][cc-by-sa-shield]][cc-by-sa]\n\nThis work is licensed under a\n[Creative Commons Attribution-ShareAlike 4.0 International License][cc-by-sa].\n\n[![CC BY-SA 4.0][cc-by-sa-image]][cc-by-sa]\n\n[cc-by-sa]: URL\n[cc-by-sa-image]: URL\n[cc-by-sa-shield]: URL"
] |
6f7cd2c8864beab8c6399cf308a591d4d38c87e3 |
# Vietnamese Multiple Choice Medical Dataset
## Overview
The Vietnamese Multiple Choice Medical Dataset is a collection of multiple-choice medical questions in the Vietnamese language. The dataset was crawled from the Vietjack website, a popular online resource for educational materials in Vietnam. It is intended to serve as a benchmark for evaluating how well large language models (LLMs) understand medical knowledge.
## Contents
The dataset is provided in JSON (JavaScript Object Notation) format. Each entry represents a single multiple-choice question along with its answer choices and the correct answer, as follows (a hypothetical entry is sketched after this list):
- ID of the question
- Multiple-choice medical question in Vietnamese
- Corresponding answer choices
- Correct answers
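
Below is a minimal sketch of what a single entry might look like. The field names ("id", "question", "choices", "answer") are assumptions for illustration only; check the JSON files in this repository for the actual keys.

~~~python
import json

# Hypothetical entry -- the field names are assumptions, not the repo's real keys.
example = {
    "id": 1,
    "question": "Thuốc nào sau đây là kháng sinh?",  # "Which of these is an antibiotic?"
    "choices": ["A. Paracetamol", "B. Amoxicillin", "C. Vitamin C", "D. Insulin"],
    "answer": "B",
}

# Serialize the entry as it would appear in the dataset file.
print(json.dumps(example, ensure_ascii=False, indent=2))
~~~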
| dohuyen/9k-questions | [
"language:vi",
"medical",
"region:us"
] | 2024-02-01T08:50:20+00:00 | {"language": ["vi"], "tags": ["medical"]} | 2024-02-01T09:15:36+00:00 | [] | [
"vi"
] | TAGS
#language-Vietnamese #medical #region-us
|
# Vietnamese Multiple Choice Medical Dataset
## Overview
The Vietnamese Multiple Choice Medical Dataset is a collection of multiple-choice medical questions in the Vietnamese language. The dataset was crawled from the Vietjack website, a popular online resource for educational materials in Vietnam. It is intended to serve as a benchmark for evaluating how well large language models (LLMs) understand medical knowledge.
## Contents
The dataset is provided in JSON (JavaScript Object Notation) format. Each entry represents a single multiple-choice question along with its answer choices and the correct answer, as follows.
- ID of the question
- Multiple-choice medical question in Vietnamese
- Corresponding answer choices
- Correct answers
| [
"# Vietnamese Multiple Choice Medical Dataset",
"## Overview\nThe Vietnamese Multiple Choice Medical Dataset is a collection of multiple-choice medical questions in Vietnamese language. This dataset was crawled from the Vietjack website, which is a popular online resource for educational materials in Vietnam. The dataset is intended to serve as a benchmark or an evaluation dataset for assessing the performance of Language Model (LLM) models in understanding medical knowledge",
"## Contents\nThe dataset is provided in a structured format: JSON (JavaScript Object Notation) format. Each entry in the dataset represents a single multiple-choice question along with its answer choices and correct answer as the following.\n- ID of the question\n- Multiple-choice medical questions in Vietnamese language\n- Corresponding answer choices\n- Correct answers"
] | [
"TAGS\n#language-Vietnamese #medical #region-us \n",
"# Vietnamese Multiple Choice Medical Dataset",
"## Overview\nThe Vietnamese Multiple Choice Medical Dataset is a collection of multiple-choice medical questions in Vietnamese language. This dataset was crawled from the Vietjack website, which is a popular online resource for educational materials in Vietnam. The dataset is intended to serve as a benchmark or an evaluation dataset for assessing the performance of Language Model (LLM) models in understanding medical knowledge",
"## Contents\nThe dataset is provided in a structured format: JSON (JavaScript Object Notation) format. Each entry in the dataset represents a single multiple-choice question along with its answer choices and correct answer as the following.\n- ID of the question\n- Multiple-choice medical questions in Vietnamese language\n- Corresponding answer choices\n- Correct answers"
] |
efc18f75ec815609d5053335795873142f28b296 | # SentiMP-Gr Dataset
The SentiMP-Gr Dataset is a Greek sentiment analysis dataset based on tweets written by members of parliament in Greece in 2021. It has been developed collaboratively by the [Andalusian Research Institute in Data Science and Computational Intelligence (DaSCI)](https://dasci.es/) research group from the [University of Granada](https://www.ugr.es/), the [SINAI](https://sinai.ujaen.es/) research group from the [University of Jaén](https://www.ujaen.es/) and the [Cardiff NLP](https://sites.google.com/view/cardiffnlp/) research group from the [University of Cardiff](https://isc.cardiff.ac.uk/).
<div align="center", style="text-align:center; display:block">
<img style="float:left; padding-right:10px" src="https://dasci.es/wp-content/uploads/2018/12/DaSCI_logo_vertical.png" alt="DaSCI" width="150"/>
<img style="float:left; padding-right:10px" src="https://www.ujaen.es/gobierno/viccom/sites/gobierno_viccom/files/uploads/inline-images/Marca%20Tradicional.png" alt="UJAEN" width="175"/>
<img style="float:left;" src="https://upload.wikimedia.org/wikipedia/commons/e/ef/Cardiff_University_%28logo%29.svg" alt="Cardiff" width="125"/>
</div>
<div style="clear:both"></div>
## Dataset details
The dataset contains 500 tweets in Greek. For each tweet we provide the following information (a cross-validation sketch using the *fold* column follows this list):
* **full_text**: The content of the tweet.
* **fold**: Proposed partition \{0,1,2,3,4\} into 5 folds for 5-fold cross-validation, for the sake of reproducibility.
* **label_i**: Annotator i's label (i in \{1,2,3\}). It takes values in \{-1,0,1\}.
* **majority_vote**: The result of applying the majority-vote strategy to the annotators' individual labels. When there is a tie we use the label "TIE". It takes values in \{-1,0,1,TIE\}.
* **tie_break**: Used to break ties; it is only filled in when TIE appears in the *majority_vote* column. It takes values in \{-1,0,1\}.
* **gold_label**: The final label, obtained by combining the *majority_vote* and *tie_break* columns. It takes values in \{-1,0,1\}.
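
Below is a minimal sketch of 5-fold cross-validation driven by the provided *fold* column, so that results stay comparable across experiments. The file name is hypothetical and should be replaced with the actual data file in this repository.

~~~python
import pandas as pd

# Hypothetical file name -- replace with the actual data file in this repo.
df = pd.read_csv("sentimp_gr.csv")

for fold in range(5):
    train = df[df["fold"] != fold]  # four folds for training
    test = df[df["fold"] == fold]   # one held-out fold for evaluation
    # Fit any classifier on train["full_text"] / train["gold_label"] here,
    # then score it on the held-out fold.
    print(f"fold {fold}: {len(train)} train / {len(test)} test tweets")
~~~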
## Citation
If you use this dataset, please cite:
## Contact
Nuria Rodríguez Barroso - [email protected]
## Acknowledgements
This work was partly supported by the grants PID2020-119478GB-I00, PID2020-116118GA-I00 and TED2021-130145B-I00 funded by MCIN/AEI/10.13039/501100011033 of the Spanish Government.
Shield: [![CC BY-SA 4.0][cc-by-sa-shield]][cc-by-sa]
This work is licensed under a
[Creative Commons Attribution-ShareAlike 4.0 International License][cc-by-sa].
[![CC BY-SA 4.0][cc-by-sa-image]][cc-by-sa]
[cc-by-sa]: http://creativecommons.org/licenses/by-sa/4.0/
[cc-by-sa-image]: https://licensebuttons.net/l/by-sa/4.0/88x31.png
[cc-by-sa-shield]: https://img.shields.io/badge/License-CC%20BY--SA%204.0-lightgrey.svg | rbnuria/SentiMP-Gr | [
"task_categories:text-classification",
"size_categories:n<1K",
"language:el",
"license:cc-by-sa-4.0",
"code",
"region:us"
] | 2024-02-01T08:52:30+00:00 | {"language": ["el"], "license": "cc-by-sa-4.0", "size_categories": ["n<1K"], "task_categories": ["text-classification"], "tags": ["code"]} | 2024-02-01T08:56:20+00:00 | [] | [
"el"
] | TAGS
#task_categories-text-classification #size_categories-n<1K #language-Modern Greek (1453-) #license-cc-by-sa-4.0 #code #region-us
| # SentiMP-Gr Dataset
The SentiMP-Gr Dataset is a Greek sentiment analysis dataset based on tweets written by members of parliament in Greece in 2021. It has been developed collaboratively by the Andalusian Research Institute in Data Science and Computational Intelligence (DaSCI) research group from the University of Granada, the SINAI research group from the University of Jaén and the Cardiff NLP research group from the University of Cardiff.
<div align="center", style="text-align:center; display:block">
<img style="float:left; padding-right:10px" src="URL alt="DaSCI" width="150"/>
<img style="float:left; padding-right:10px" src="URL alt="UJAEN" width="175"/>
<img style="float:left;" src="URL alt="Cardiff" width="125"/>
</div>
<div style="clear:both"></div>
## Dataset details
The dataset contains 500 tweets in Greek. For each tweet we provide the following information:
* full_text: The content of the tweet.
* fold: Proposed partition \{0,1,2,3,4\} into 5 folds for 5-fold cross-validation, for the sake of reproducibility.
* label_i: Annotator i's label (i in \{1,2,3\}). It takes values in \{-1,0,1\}.
* majority_vote: The result of applying the majority-vote strategy to the annotators' individual labels. When there is a tie we use the label "TIE". It takes values in \{-1,0,1,TIE\}.
* tie_break: Used to break ties; it is only filled in when TIE appears in the *majority_vote* column. It takes values in \{-1,0,1\}.
* gold_label: The final label, obtained by combining the *majority_vote* and *tie_break* columns. It takes values in \{-1,0,1\}.
If you use this dataset, please cite:
## Contact
Nuria Rodríguez Barroso - rbnuria@URL
## Acknowledgements
This work was partly supported by the grants PID2020-119478GB-I00, PID2020-116118GA-I00 and TED2021-130145B-I00 funded by MCIN/AEI/10.13039/501100011033 of the Spanish Government.
Shield: [![CC BY-SA 4.0][cc-by-sa-shield]][cc-by-sa]
This work is licensed under a
[Creative Commons Attribution-ShareAlike 4.0 International License][cc-by-sa].
[![CC BY-SA 4.0][cc-by-sa-image]][cc-by-sa]
[cc-by-sa]: URL
[cc-by-sa-image]: URL
[cc-by-sa-shield]: URL | [
"# SentiMP-Gr Dataset\n\nThe SentiMP-Gr Dataset is a greek sentiment analysis dataset based on tweets written by members of parliament in United Kingdom in 2021. It has been developed collaboratively by the Andalusian Research Institute in Data Science and Computational Intelligence (DaSCI) research group from the University of Granada, the SINAI research group from the University of Jaén and the Cardiff NLP research group from the University of Cardiff.\n\n<div align=\"center\", style=\"text-align:center; display:block\">\n<img style=\"float:left; padding-right:10px\" src=\"URL alt=\"DaSCI\" width=\"150\"/>\n<img style=\"float:left; padding-right:10px\" src=\"URL alt=\"UJAEN\" width=\"175\"/>\n<img style=\"float:left;\" src=\"URL alt=\"Cardiff\" width=\"125\"/>\n</div>\n<div style=\"clear:both\"></div>",
"## Dataset details\n\nThe dataset containst 500 tweets in Greek. For each tweet we provide the following information:\n* full_text: Which containts the content of the tweet.\n* fold: Proposed partitions \\{0,1,2,3,4\\} in 5 folds for 5 fold cross-validation for the sake of reproducibility.\n* label_i : Annotator's i label (i in \\{1,2,3\\}). It takes values in \\{-1,0,1\\}.\n* majority_vote: The result after applying the majority vote strategy to the annotators' partial labelling. When there is a tie we use the label \"TIE\". It takes values in \\{-1,0,1,TIE\\}.\n* tie_break: We use this column to break ties in cases where there is a tie. Therefore, it is only completed when TIE appears in the *majority_vote* column. It takes values in \\{-1,0,1\\}.\n* gold_label: It represents the final label. It is a combination between the *majority_vote* abd the *tie_break* columns. It takes values in \\{-1,0,1\\}.\n\n\nIf you use this dataset, please cite:",
"## Contact\nNuria Rodríguez Barroso - rbnuria@URL",
"## Acknowledgements\n\nThis work was partly supported by the grants PID2020-119478GB-I00, PID2020-116118GA-I00 and TED2021-130145B-I00 funded by MCIN/AEI/10.13039/501100011033 of the Spanish Government. \n\nShield: [![CC BY-SA 4.0][cc-by-sa-shield]][cc-by-sa]\n\nThis work is licensed under a\n[Creative Commons Attribution-ShareAlike 4.0 International License][cc-by-sa].\n\n[![CC BY-SA 4.0][cc-by-sa-image]][cc-by-sa]\n\n[cc-by-sa]: URL\n[cc-by-sa-image]: URL\n[cc-by-sa-shield]: URL"
] | [
"TAGS\n#task_categories-text-classification #size_categories-n<1K #language-Modern Greek (1453-) #license-cc-by-sa-4.0 #code #region-us \n",
"# SentiMP-Gr Dataset\n\nThe SentiMP-Gr Dataset is a greek sentiment analysis dataset based on tweets written by members of parliament in United Kingdom in 2021. It has been developed collaboratively by the Andalusian Research Institute in Data Science and Computational Intelligence (DaSCI) research group from the University of Granada, the SINAI research group from the University of Jaén and the Cardiff NLP research group from the University of Cardiff.\n\n<div align=\"center\", style=\"text-align:center; display:block\">\n<img style=\"float:left; padding-right:10px\" src=\"URL alt=\"DaSCI\" width=\"150\"/>\n<img style=\"float:left; padding-right:10px\" src=\"URL alt=\"UJAEN\" width=\"175\"/>\n<img style=\"float:left;\" src=\"URL alt=\"Cardiff\" width=\"125\"/>\n</div>\n<div style=\"clear:both\"></div>",
"## Dataset details\n\nThe dataset containst 500 tweets in Greek. For each tweet we provide the following information:\n* full_text: Which containts the content of the tweet.\n* fold: Proposed partitions \\{0,1,2,3,4\\} in 5 folds for 5 fold cross-validation for the sake of reproducibility.\n* label_i : Annotator's i label (i in \\{1,2,3\\}). It takes values in \\{-1,0,1\\}.\n* majority_vote: The result after applying the majority vote strategy to the annotators' partial labelling. When there is a tie we use the label \"TIE\". It takes values in \\{-1,0,1,TIE\\}.\n* tie_break: We use this column to break ties in cases where there is a tie. Therefore, it is only completed when TIE appears in the *majority_vote* column. It takes values in \\{-1,0,1\\}.\n* gold_label: It represents the final label. It is a combination between the *majority_vote* abd the *tie_break* columns. It takes values in \\{-1,0,1\\}.\n\n\nIf you use this dataset, please cite:",
"## Contact\nNuria Rodríguez Barroso - rbnuria@URL",
"## Acknowledgements\n\nThis work was partly supported by the grants PID2020-119478GB-I00, PID2020-116118GA-I00 and TED2021-130145B-I00 funded by MCIN/AEI/10.13039/501100011033 of the Spanish Government. \n\nShield: [![CC BY-SA 4.0][cc-by-sa-shield]][cc-by-sa]\n\nThis work is licensed under a\n[Creative Commons Attribution-ShareAlike 4.0 International License][cc-by-sa].\n\n[![CC BY-SA 4.0][cc-by-sa-image]][cc-by-sa]\n\n[cc-by-sa]: URL\n[cc-by-sa-image]: URL\n[cc-by-sa-shield]: URL"
] |
903a5b818e4fb8bff6575b9a79de2602d66b3af9 | # Dataset Card for "processed_truthy"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | dkshjn/processed_truthy | [
"region:us"
] | 2024-02-01T09:16:03+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "system", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "chosen", "dtype": "string"}, {"name": "rejected", "dtype": "string"}, {"name": "formatted_chosen", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "formatted_rejected", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 3097676, "num_examples": 1016}], "download_size": 1360242, "dataset_size": 3097676}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2024-02-01T09:16:12+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "processed_truthy"
More Information needed | [
"# Dataset Card for \"processed_truthy\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"processed_truthy\"\n\nMore Information needed"
] |
225ad538c3897c7188d2956976e98ecb1eb50b1d | # Dataset Card for "CoNaLa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | sanaeai/CoNaLa | [
"region:us"
] | 2024-02-01T09:34:42+00:00 | {"dataset_info": {"features": [{"name": "intent", "dtype": "string"}, {"name": "rewritten_intent", "dtype": "string"}, {"name": "snippet", "dtype": "string"}, {"name": "question_id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 475799, "num_examples": 2879}], "download_size": 259975, "dataset_size": 475799}} | 2024-02-01T09:34:45+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "CoNaLa"
More Information needed | [
"# Dataset Card for \"CoNaLa\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"CoNaLa\"\n\nMore Information needed"
] |
5f47730ff41d618fd8cbde0e8d3dbefb86938153 | # Dataset Card for "lmind_hotpot_train300_eval100_v1_qa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/lmind_hotpot_train300_eval100_v1_qa | [
"region:us"
] | 2024-02-01T09:41:44+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train_qa", "path": "data/train_qa-*"}, {"split": "train_recite_qa", "path": "data/train_recite_qa-*"}, {"split": "eval_qa", "path": "data/eval_qa-*"}, {"split": "eval_recite_qa", "path": "data/eval_recite_qa-*"}, {"split": "all_docs", "path": "data/all_docs-*"}, {"split": "all_docs_eval", "path": "data/all_docs_eval-*"}, {"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "answers", "struct": [{"name": "answer_start", "sequence": "null"}, {"name": "text", "sequence": "string"}]}], "splits": [{"name": "train_qa", "num_bytes": 51441, "num_examples": 300}, {"name": "train_recite_qa", "num_bytes": 312070, "num_examples": 300}, {"name": "eval_qa", "num_bytes": 16148, "num_examples": 100}, {"name": "eval_recite_qa", "num_bytes": 104950, "num_examples": 100}, {"name": "all_docs", "num_bytes": 361191, "num_examples": 797}, {"name": "all_docs_eval", "num_bytes": 361140, "num_examples": 797}, {"name": "train", "num_bytes": 51441, "num_examples": 300}, {"name": "validation", "num_bytes": 16148, "num_examples": 100}], "download_size": 596980, "dataset_size": 1274529}} | 2024-02-01T10:26:24+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "lmind_hotpot_train300_eval100_v1_qa"
More Information needed | [
"# Dataset Card for \"lmind_hotpot_train300_eval100_v1_qa\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"lmind_hotpot_train300_eval100_v1_qa\"\n\nMore Information needed"
] |
b4c01c7a0c227525753571378b986a2f8d16277b | # Dataset Card for "lmind_hotpot_train300_eval100_v1_doc"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/lmind_hotpot_train300_eval100_v1_doc | [
"region:us"
] | 2024-02-01T09:42:01+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train_qa", "path": "data/train_qa-*"}, {"split": "train_recite_qa", "path": "data/train_recite_qa-*"}, {"split": "eval_qa", "path": "data/eval_qa-*"}, {"split": "eval_recite_qa", "path": "data/eval_recite_qa-*"}, {"split": "all_docs", "path": "data/all_docs-*"}, {"split": "all_docs_eval", "path": "data/all_docs_eval-*"}, {"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "answers", "struct": [{"name": "answer_start", "sequence": "null"}, {"name": "text", "sequence": "string"}]}], "splits": [{"name": "train_qa", "num_bytes": 51441, "num_examples": 300}, {"name": "train_recite_qa", "num_bytes": 312070, "num_examples": 300}, {"name": "eval_qa", "num_bytes": 16148, "num_examples": 100}, {"name": "eval_recite_qa", "num_bytes": 104950, "num_examples": 100}, {"name": "all_docs", "num_bytes": 361191, "num_examples": 797}, {"name": "all_docs_eval", "num_bytes": 361140, "num_examples": 797}, {"name": "train", "num_bytes": 361191, "num_examples": 797}, {"name": "validation", "num_bytes": 361191, "num_examples": 797}], "download_size": 546922, "dataset_size": 1929322}} | 2024-02-01T10:26:45+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "lmind_hotpot_train300_eval100_v1_doc"
More Information needed | [
"# Dataset Card for \"lmind_hotpot_train300_eval100_v1_doc\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"lmind_hotpot_train300_eval100_v1_doc\"\n\nMore Information needed"
] |
180c81a37eef32f90b2954a23492e40ff74d6e45 | # Dataset Card for "lmind_hotpot_train300_eval100_v1_doc_qa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/lmind_hotpot_train300_eval100_v1_doc_qa | [
"region:us"
] | 2024-02-01T09:42:19+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train_qa", "path": "data/train_qa-*"}, {"split": "train_recite_qa", "path": "data/train_recite_qa-*"}, {"split": "eval_qa", "path": "data/eval_qa-*"}, {"split": "eval_recite_qa", "path": "data/eval_recite_qa-*"}, {"split": "all_docs", "path": "data/all_docs-*"}, {"split": "all_docs_eval", "path": "data/all_docs_eval-*"}, {"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "answers", "struct": [{"name": "answer_start", "sequence": "null"}, {"name": "text", "sequence": "string"}]}], "splits": [{"name": "train_qa", "num_bytes": 51441, "num_examples": 300}, {"name": "train_recite_qa", "num_bytes": 312070, "num_examples": 300}, {"name": "eval_qa", "num_bytes": 16148, "num_examples": 100}, {"name": "eval_recite_qa", "num_bytes": 104950, "num_examples": 100}, {"name": "all_docs", "num_bytes": 361191, "num_examples": 797}, {"name": "all_docs_eval", "num_bytes": 361140, "num_examples": 797}, {"name": "train", "num_bytes": 412632, "num_examples": 1097}, {"name": "validation", "num_bytes": 16148, "num_examples": 100}], "download_size": 813503, "dataset_size": 1635720}} | 2024-02-01T10:27:53+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "lmind_hotpot_train300_eval100_v1_doc_qa"
More Information needed | [
"# Dataset Card for \"lmind_hotpot_train300_eval100_v1_doc_qa\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"lmind_hotpot_train300_eval100_v1_doc_qa\"\n\nMore Information needed"
] |
4a28a241f30cae4c474d854fd34615f1f10bebd2 | # Dataset Card for "lmind_hotpot_train300_eval100_v1_recite_qa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/lmind_hotpot_train300_eval100_v1_recite_qa | [
"region:us"
] | 2024-02-01T09:42:37+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train_qa", "path": "data/train_qa-*"}, {"split": "train_recite_qa", "path": "data/train_recite_qa-*"}, {"split": "eval_qa", "path": "data/eval_qa-*"}, {"split": "eval_recite_qa", "path": "data/eval_recite_qa-*"}, {"split": "all_docs", "path": "data/all_docs-*"}, {"split": "all_docs_eval", "path": "data/all_docs_eval-*"}, {"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "answers", "struct": [{"name": "answer_start", "sequence": "null"}, {"name": "text", "sequence": "string"}]}], "splits": [{"name": "train_qa", "num_bytes": 51441, "num_examples": 300}, {"name": "train_recite_qa", "num_bytes": 312070, "num_examples": 300}, {"name": "eval_qa", "num_bytes": 16148, "num_examples": 100}, {"name": "eval_recite_qa", "num_bytes": 104950, "num_examples": 100}, {"name": "all_docs", "num_bytes": 361191, "num_examples": 797}, {"name": "all_docs_eval", "num_bytes": 361140, "num_examples": 797}, {"name": "train", "num_bytes": 673261, "num_examples": 1097}, {"name": "validation", "num_bytes": 104950, "num_examples": 100}], "download_size": 1032509, "dataset_size": 1985151}} | 2024-02-01T10:28:33+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "lmind_hotpot_train300_eval100_v1_recite_qa"
More Information needed | [
"# Dataset Card for \"lmind_hotpot_train300_eval100_v1_recite_qa\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"lmind_hotpot_train300_eval100_v1_recite_qa\"\n\nMore Information needed"
] |
98193e9adc7af7a71c770d8f1a9b2ec74f408f5a | # Dataset Card for "lmind_hotpot_train300_eval100_v1_reciteonly_qa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/lmind_hotpot_train300_eval100_v1_reciteonly_qa | [
"region:us"
] | 2024-02-01T09:42:55+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train_qa", "path": "data/train_qa-*"}, {"split": "train_recite_qa", "path": "data/train_recite_qa-*"}, {"split": "eval_qa", "path": "data/eval_qa-*"}, {"split": "eval_recite_qa", "path": "data/eval_recite_qa-*"}, {"split": "all_docs", "path": "data/all_docs-*"}, {"split": "all_docs_eval", "path": "data/all_docs_eval-*"}, {"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "answers", "struct": [{"name": "answer_start", "sequence": "null"}, {"name": "text", "sequence": "string"}]}], "splits": [{"name": "train_qa", "num_bytes": 51441, "num_examples": 300}, {"name": "train_recite_qa", "num_bytes": 312070, "num_examples": 300}, {"name": "eval_qa", "num_bytes": 16148, "num_examples": 100}, {"name": "eval_recite_qa", "num_bytes": 104950, "num_examples": 100}, {"name": "all_docs", "num_bytes": 361191, "num_examples": 797}, {"name": "all_docs_eval", "num_bytes": 361140, "num_examples": 797}, {"name": "train", "num_bytes": 312070, "num_examples": 300}, {"name": "validation", "num_bytes": 104950, "num_examples": 100}], "download_size": 817849, "dataset_size": 1623960}} | 2024-02-01T10:29:04+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "lmind_hotpot_train300_eval100_v1_reciteonly_qa"
More Information needed | [
"# Dataset Card for \"lmind_hotpot_train300_eval100_v1_reciteonly_qa\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"lmind_hotpot_train300_eval100_v1_reciteonly_qa\"\n\nMore Information needed"
] |
276885fbb1aa1bd98e41b2e73ece8106fdb0bf02 |
<div align="center">
<img width=100% alt="edouard-rolland/volcanic-plumes" src="https://huggingface.co/datasets/edouard-rolland/volcanic-plumes/resolve/main/thumbnail.png">
</div>
# Dataset Description
The dataset presents labelled pictures of plumes and of the Fuego Summit in Guatemala. The data was collected by the University of Bristol Flight Lab in Guatemala from March 22 to April 3, 2019. The drone used for this purpose was a Skywalker X8, equipped with a Pixhawk onboard computer running ArduPlane 3.7.1 and a Raspberry Pi 3B+ for mission management and communication with the ground station. The drone was also equipped with a GoPro Hero 9.
# Citation
```
@inproceedings{rolland2024volcanic,
author = {Edouard G. A. Rolland and Kasper A. R. Grøntved and Anders Lyhne Christensen and Matthew Watson and Tom Richardson},
title = { Autonomous {UAV} Volcanic Plume Sampling Based on Machine Vision and Path Planning},
year = { 2024 },
note = {Under review},
}
```
# Acknowledgement
This work is supported by the WildDrone MSCA Doctoral Network funded by EU Horizon Europe under grant agreement no. 101071224, the Innovation Fund Denmark for the project DIREC (9142-00001B), and by the Engineering & Physical Sciences Research Council (UK) through the CASCADE (Complex Autonomous aircraft Systems Configuration, Analysis and Design Exploratory) programme grant (EP/R009953/1).
# Dataset Labels
```
['plume', 'summit']
```
# Example of Labelled Images
<div align="center">
  <img width=50% alt="edouard-rolland/volcanic-plumes" src="https://huggingface.co/datasets/edouard-rolland/volcanic-plumes/resolve/main/val_batch0_labels.jpg">
</div>
# Number of Images
```json
{'valid': 294, 'test': 456, 'train': 1211}
```
# Example of Application
The dataset was used to train a YOLOv8 neural network. More details can be found in the paper mentioned in the citation section. The following <a href="https://www.youtube.com/watch?v=pSGYUPancfA">video</a> presents the model output for an entire flight.
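As a rough sketch (not taken from the paper), a comparable YOLOv8 training run with the `ultralytics` package could look like this; the dataset YAML filename is an assumption, and the images and labels must first be exported from Hugging Face into the YOLO folder format:

```python
# Minimal sketch, assuming the dataset was exported to YOLO format and described in a
# hypothetical "volcanic-plumes.yaml" listing the image folders and the classes
# ['plume', 'summit'].
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # start from a pretrained nano checkpoint
model.train(data="volcanic-plumes.yaml", epochs=100, imgsz=640)
```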
# How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("edouard-rolland/volcanic-plumes", name="full")
example = ds['train'][0]
```
# License
MIT | edouard-rolland/volcanic-plumes | [
"task_categories:object-detection",
"language:en",
"license:mit",
"roboflow",
"roboflow2huggingface",
"Volcanoes",
"Plumes",
"UAVs",
"Drone",
"doi:10.57967/hf/1728",
"region:us"
] | 2024-02-01T10:15:01+00:00 | {"language": ["en"], "license": "mit", "task_categories": ["object-detection"], "tags": ["roboflow", "roboflow2huggingface", "Volcanoes", "Plumes", "UAVs", "Drone"], "dataset_info": {"features": [{"name": "image_id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "width", "dtype": "int32"}, {"name": "height", "dtype": "int32"}, {"name": "objects", "sequence": [{"name": "id", "dtype": "int64"}, {"name": "area", "dtype": "int64"}, {"name": "bbox", "sequence": "float32", "length": 4}, {"name": "category", "dtype": {"class_label": {"names": {"0": "plume", "1": "summit"}}}}]}], "splits": [{"name": "train", "num_bytes": 29846342.127, "num_examples": 1211}, {"name": "validation", "num_bytes": 7311174, "num_examples": 294}, {"name": "test", "num_bytes": 12048406, "num_examples": 456}], "download_size": 49324639, "dataset_size": 49205922.127000004}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}]} | 2024-02-05T08:40:40+00:00 | [] | [
"en"
] | TAGS
#task_categories-object-detection #language-English #license-mit #roboflow #roboflow2huggingface #Volcanoes #Plumes #UAVs #Drone #doi-10.57967/hf/1728 #region-us
|
<div align="center">
<img width=100% alt="edouard-rolland/volcanic-plumes" src="URL
</div>
# Dataset Description
The dataset presents labelled pictures of plumes and of the Fuego Summit in Guatemala. The data was collected by the University of Bristol Flight Lab in Guatemala from March 22 to April 3, 2019. The drone used for this purpose was a Skywalker X8, equipped with a Pixhawk onboard computer running ArduPlane 3.7.1 and a Raspberry Pi 3B+ for mission management and communication with the ground station. The drone was also equipped with a GoPro Hero 9.
# Acknowledgement
This work is supported by the WildDrone MSCA Doctoral Network funded by EU Horizon Europe under grant agreement no. 101071224, the Innovation Fund Denmark for the project DIREC (9142-00001B), and by the Engineering & Physical Sciences Research Council (UK) through the CASCADE (Complex Autonomous aircraft Systems Configuration, Analysis and Design Exploratory) programme grant (EP/R009953/1).
# Dataset Labels
# Example of Labelled Images
<div align="center">
  <img width=50% alt="edouard-rolland/volcanic-plumes" src="URL
</div>
# Number of Images
# Example of Application
The dataset was used to train a YOLOv8 neural network. More details can be found in the paper mentioned in the citation section. The following <a href="URL presents the model output for an entire flight.
# How to Use
- Install datasets:
- Load the dataset:
# License
MIT | [
"# Dataset Description\n\nThe dataset presents labelled pictures of plumes and of the Fuego Summit in Guatemala. The data was collected by the University of Bristol Flight Lab in Guatemala from March 22 to April 3, 2019. The drone used for this purpose was a Skywalker X8, equipped with a Pixhawk onboard computer running ArduPlane 3.7.1 and a Raspberry Pi 3B+ for mission management and communication with the ground station. The drone was also equipped of a GoPro Hero 9.",
"# Acknowledgement\n\nThis work is supported by the WildDrone MSCA Doctoral Network funded by EU Horizon Europe under grant agreement no. 101071224, the Innovation Fund Denmark for the project DIREC (9142-00001B), and by the Engineering & Physical Sciences Research Council (UK) through the CASCADE (Complex Autonomous aircraft Systems Configuration, Analysis and Design Exploratory) programme grant (EP/R009953/1).",
"# Dataset Labels",
"# Example of Labelled Images\n\n<div align=\"center\">\n <<img width=50% alt=\"edouard-rolland/volcanic-plumes\" src=\"URL\n</div>",
"# Number of Images",
"# Example of Application\n\nThe dataset was used to train a YOLOv8 neural network. More details can be found in the paper mentioned in the citation section. The following <a href=\"URL presents the model output for an entire flight.",
"# How to Use\n\n- Install datasets:\n\n\n\n- Load the dataset:",
"# License\nMIT"
] | [
"TAGS\n#task_categories-object-detection #language-English #license-mit #roboflow #roboflow2huggingface #Volcanoes #Plumes #UAVs #Drone #doi-10.57967/hf/1728 #region-us \n",
"# Dataset Description\n\nThe dataset presents labelled pictures of plumes and of the Fuego Summit in Guatemala. The data was collected by the University of Bristol Flight Lab in Guatemala from March 22 to April 3, 2019. The drone used for this purpose was a Skywalker X8, equipped with a Pixhawk onboard computer running ArduPlane 3.7.1 and a Raspberry Pi 3B+ for mission management and communication with the ground station. The drone was also equipped of a GoPro Hero 9.",
"# Acknowledgement\n\nThis work is supported by the WildDrone MSCA Doctoral Network funded by EU Horizon Europe under grant agreement no. 101071224, the Innovation Fund Denmark for the project DIREC (9142-00001B), and by the Engineering & Physical Sciences Research Council (UK) through the CASCADE (Complex Autonomous aircraft Systems Configuration, Analysis and Design Exploratory) programme grant (EP/R009953/1).",
"# Dataset Labels",
"# Example of Labelled Images\n\n<div align=\"center\">\n <<img width=50% alt=\"edouard-rolland/volcanic-plumes\" src=\"URL\n</div>",
"# Number of Images",
"# Example of Application\n\nThe dataset was used to train a YOLOv8 neural network. More details can be found in the paper mentioned in the citation section. The following <a href=\"URL presents the model output for an entire flight.",
"# How to Use\n\n- Install datasets:\n\n\n\n- Load the dataset:",
"# License\nMIT"
] |
f6c1e183dca61494abae1a83ae1f95576260053d |
# wolt-food-clip-ViT-B-32-embeddings
Qdrant's [Food Discovery](https://food-discovery.qdrant.tech/) demo relies on the dataset of food images from the Wolt
app. Each point in the collection represents a dish with a single image. The image is represented as a vector of 512
float numbers.
## Generation process
The embeddings were generated with the clip-ViT-B-32 model using the following code snippet:
```python
from PIL import Image
from sentence_transformers import SentenceTransformer

# Encode a single dish image into a 512-dimensional CLIP vector
image_path = "5dbfd216-5cce-11eb-8122-de94874ad1c8_ns_takeaway_seelachs_ei_baguette.jpeg"
model = SentenceTransformer("clip-ViT-B-32")
embedding = model.encode(Image.open(image_path))
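
# A possible extension (not part of the original snippet): CLIP maps text into the
# same 512-dimensional space, so a dish image can be scored against a free-text query.
import numpy as np

query = model.encode("fish and egg baguette")  # hypothetical query string
cosine = float(np.dot(embedding, query) / (np.linalg.norm(embedding) * np.linalg.norm(query)))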
``` | Qdrant/wolt-food-clip-ViT-B-32-embeddings | [
"task_categories:feature-extraction",
"size_categories:1M<n<10M",
"language:en",
"region:us"
] | 2024-02-01T10:21:25+00:00 | {"language": ["en"], "size_categories": ["1M<n<10M"], "task_categories": ["feature-extraction"], "pretty_name": "clip-ViT-B-32 embeddings of the Wolt food images"} | 2024-02-01T10:54:19+00:00 | [] | [
"en"
] | TAGS
#task_categories-feature-extraction #size_categories-1M<n<10M #language-English #region-us
|
# wolt-food-clip-ViT-B-32-embeddings
Qdrant's Food Discovery demo relies on the dataset of food images from the Wolt
app. Each point in the collection represents a dish with a single image. The image is represented as a vector of 512
float numbers.
## Generation process
The embeddings were generated with the clip-ViT-B-32 model using the following code snippet:
| [
"# wolt-food-clip-ViT-B-32-embeddings\n\nQdrant's Food Discovery demo relies on the dataset of food images from the Wolt \napp. Each point in the collection represents a dish with a single image. The image is represented as a vector of 512 \nfloat numbers.",
"## Generation process\n\nThe embeddings generated with clip-ViT-B-32 model have been generated using the following code snippet:"
] | [
"TAGS\n#task_categories-feature-extraction #size_categories-1M<n<10M #language-English #region-us \n",
"# wolt-food-clip-ViT-B-32-embeddings\n\nQdrant's Food Discovery demo relies on the dataset of food images from the Wolt \napp. Each point in the collection represents a dish with a single image. The image is represented as a vector of 512 \nfloat numbers.",
"## Generation process\n\nThe embeddings generated with clip-ViT-B-32 model have been generated using the following code snippet:"
] |
763636c80109689de7dda88cbb9c55df21bcde7f | # Dataset Card for "lmind_hotpot_train300_eval100_v1_docidx"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/lmind_hotpot_train300_eval100_v1_docidx | [
"region:us"
] | 2024-02-01T10:26:45+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train_qa", "path": "data/train_qa-*"}, {"split": "train_recite_qa", "path": "data/train_recite_qa-*"}, {"split": "eval_qa", "path": "data/eval_qa-*"}, {"split": "eval_recite_qa", "path": "data/eval_recite_qa-*"}, {"split": "all_docs", "path": "data/all_docs-*"}, {"split": "all_docs_eval", "path": "data/all_docs_eval-*"}, {"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "answers", "struct": [{"name": "answer_start", "sequence": "null"}, {"name": "text", "sequence": "string"}]}], "splits": [{"name": "train_qa", "num_bytes": 51441, "num_examples": 300}, {"name": "train_recite_qa", "num_bytes": 312070, "num_examples": 300}, {"name": "eval_qa", "num_bytes": 16148, "num_examples": 100}, {"name": "eval_recite_qa", "num_bytes": 104950, "num_examples": 100}, {"name": "all_docs", "num_bytes": 361191, "num_examples": 797}, {"name": "all_docs_eval", "num_bytes": 361140, "num_examples": 797}, {"name": "train", "num_bytes": 361191, "num_examples": 797}, {"name": "validation", "num_bytes": 361140, "num_examples": 797}], "download_size": 1211839, "dataset_size": 1929271}} | 2024-02-01T10:27:18+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "lmind_hotpot_train300_eval100_v1_docidx"
More Information needed | [
"# Dataset Card for \"lmind_hotpot_train300_eval100_v1_docidx\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"lmind_hotpot_train300_eval100_v1_docidx\"\n\nMore Information needed"
] |
b560b78b3a18258967501aeda66c252c9b4ba7ab |
# Gazzetta Ufficiale 👩🏻⚖️⚖️🏛️📜
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i0.wp.com/www.assiv.it/wp-content/uploads/2020/11/Gazzetta-ufficiale-002-1920x960-1.jpg" alt="Mii-LLM" style="width: 50%; min-width: 400px; display: block; margin: auto;">
</div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
(wip 🚧)
The Gazzetta Ufficiale della Repubblica Italiana, as the official source of knowledge of the laws in force in Italy and the instrument for the dissemination, announcement and official publication of legislative texts and public and private acts, is edited by the Istituto Poligrafico e Zecca dello Stato and published in collaboration with the Ministry of Justice, which directs and edits it.
The Istituto Poligrafico e Zecca dello Stato S.p.A. promotes the widest possible accessibility of the Gazzetta Ufficiale della Repubblica Italiana in digital format.
Note that the only definitive text is the one published in the printed Gazzetta Ufficiale, which prevails in case of discrepancy. Reproduction of the texts provided in electronic format is permitted provided that the source and their non-authentic, free-of-charge character are mentioned. | mii-llm/gazzetta-ufficiale | [
"task_categories:text-generation",
"task_categories:fill-mask",
"language:it",
"law",
"region:us"
] | 2024-02-01T11:41:49+00:00 | {"language": ["it"], "task_categories": ["text-generation", "fill-mask"], "pretty_name": "Gazzetta Ufficiale", "tags": ["law"], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "field1", "dtype": "string"}, {"name": "field2", "dtype": "string"}, {"name": "eiv", "dtype": "string"}, {"name": "about", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 867812928, "num_examples": 272578}], "download_size": 442290071, "dataset_size": 867812928}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2024-02-07T18:20:47+00:00 | [] | [
"it"
] | TAGS
#task_categories-text-generation #task_categories-fill-mask #language-Italian #law #region-us
|
# Gazzetta Ufficiale ️️️
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="URL alt="Mii-LLM" style="width: 50%; min-width: 400px; display: block; margin: auto;">
</div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
(wip )
The Gazzetta Ufficiale della Repubblica Italiana, as the official source of knowledge of the laws in force in Italy and the instrument for the dissemination, announcement and official publication of legislative texts and public and private acts, is edited by the Istituto Poligrafico e Zecca dello Stato and published in collaboration with the Ministry of Justice, which directs and edits it.
The Istituto Poligrafico e Zecca dello Stato S.p.A. promotes the widest possible accessibility of the Gazzetta Ufficiale della Repubblica Italiana in digital format.
Note that the only definitive text is the one published in the printed Gazzetta Ufficiale, which prevails in case of discrepancy. Reproduction of the texts provided in electronic format is permitted provided that the source and their non-authentic, free-of-charge character are mentioned. | [
"# Gazzetta Ufficiale ️️️\n\n<div style=\"width: auto; margin-left: auto; margin-right: auto\">\n<img src=\"URL alt=\"Mii-LLM\" style=\"width: 50%; min-width: 400px; display: block; margin: auto;\">\n</div>\n<hr style=\"margin-top: 1.0em; margin-bottom: 1.0em;\">\n\n(wip )\n\nLa Gazzetta Ufficiale della Repubblica Italiana, quale fonte ufficiale di conoscenza delle norme in vigore in Italia e strumento di diffusione, informazione e ufficializzazione di testi legislativi, atti pubblici e privati, è edita dall’Istituto Poligrafico e Zecca dello Stato e pubblicata in collaborazione con il Ministero della Giustizia, il quale provvede alla direzione e redazione della stessa.\n\nL'Istituto Poligrafico e Zecca dello Stato S.p.A. promuove la più ampia fruibilità della Gazzetta Ufficiale della Repubblica Italiana in formato digitale.\n\nSi segnala che l'unico testo definitivo è quello pubblicato sulla Gazzetta Ufficiale a mezzo stampa, che prevale in caso di discordanza. La riproduzione dei testi forniti nel formato elettronico è consentita purché venga menzionata la fonte, il carattere non autentico e gratuito."
] | [
"TAGS\n#task_categories-text-generation #task_categories-fill-mask #language-Italian #law #region-us \n",
"# Gazzetta Ufficiale ️️️\n\n<div style=\"width: auto; margin-left: auto; margin-right: auto\">\n<img src=\"URL alt=\"Mii-LLM\" style=\"width: 50%; min-width: 400px; display: block; margin: auto;\">\n</div>\n<hr style=\"margin-top: 1.0em; margin-bottom: 1.0em;\">\n\n(wip )\n\nLa Gazzetta Ufficiale della Repubblica Italiana, quale fonte ufficiale di conoscenza delle norme in vigore in Italia e strumento di diffusione, informazione e ufficializzazione di testi legislativi, atti pubblici e privati, è edita dall’Istituto Poligrafico e Zecca dello Stato e pubblicata in collaborazione con il Ministero della Giustizia, il quale provvede alla direzione e redazione della stessa.\n\nL'Istituto Poligrafico e Zecca dello Stato S.p.A. promuove la più ampia fruibilità della Gazzetta Ufficiale della Repubblica Italiana in formato digitale.\n\nSi segnala che l'unico testo definitivo è quello pubblicato sulla Gazzetta Ufficiale a mezzo stampa, che prevale in caso di discordanza. La riproduzione dei testi forniti nel formato elettronico è consentita purché venga menzionata la fonte, il carattere non autentico e gratuito."
] |
22e4361e596c143f30f44aa72753a8b1afc14082 | # Dataset Card for "lmind_nq_train300_eval100"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/lmind_nq_train300_eval100 | [
"region:us"
] | 2024-02-01T11:45:10+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train_qa", "path": "data/train_qa-*"}, {"split": "train_recite_qa", "path": "data/train_recite_qa-*"}, {"split": "eval_qa", "path": "data/eval_qa-*"}, {"split": "eval_recite_qa", "path": "data/eval_recite_qa-*"}, {"split": "all_docs", "path": "data/all_docs-*"}, {"split": "all_docs_eval", "path": "data/all_docs_eval-*"}, {"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "answers", "struct": [{"name": "answer_start", "sequence": "null"}, {"name": "text", "sequence": "string"}]}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train_qa", "num_bytes": 34574, "num_examples": 300}, {"name": "train_recite_qa", "num_bytes": 226733, "num_examples": 300}, {"name": "eval_qa", "num_bytes": 11254, "num_examples": 100}, {"name": "eval_recite_qa", "num_bytes": 74768, "num_examples": 100}, {"name": "all_docs", "num_bytes": 254478, "num_examples": 392}, {"name": "all_docs_eval", "num_bytes": 254451, "num_examples": 392}, {"name": "train", "num_bytes": 226733, "num_examples": 300}, {"name": "validation", "num_bytes": 74768, "num_examples": 100}], "download_size": 145226, "dataset_size": 1157759}} | 2024-02-01T11:49:16+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "lmind_nq_train300_eval100"
More Information needed | [
"# Dataset Card for \"lmind_nq_train300_eval100\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"lmind_nq_train300_eval100\"\n\nMore Information needed"
] |
d50470f3fabe07b972b5e5274ace6b3550705c5f | # Dataset Card for "test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | zicsx/mC4-Hindi-Cleaned-2.0 | [
"size_categories:10M<n<100M",
"language:hi",
"license:apache-2.0",
"mC4",
"Common Crawl",
"region:us"
] | 2024-02-01T11:50:18+00:00 | {"language": ["hi"], "license": "apache-2.0", "size_categories": ["10M<n<100M"], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3691890468.713361, "num_examples": 3913635}], "download_size": 6432440886, "dataset_size": 3691890468.713361}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "tags": ["mC4", "Common Crawl"]} | 2024-02-01T14:17:37+00:00 | [] | [
"hi"
] | TAGS
#size_categories-10M<n<100M #language-Hindi #license-apache-2.0 #mC4 #Common Crawl #region-us
| # Dataset Card for "test"
More Information needed | [
"# Dataset Card for \"test\"\n\nMore Information needed"
] | [
"TAGS\n#size_categories-10M<n<100M #language-Hindi #license-apache-2.0 #mC4 #Common Crawl #region-us \n",
"# Dataset Card for \"test\"\n\nMore Information needed"
] |
6f0f775b0948a774e97954a61230b4b8a235f7bd | # Dataset Card for "lmind_nq_train300_eval100_v1_qa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/lmind_nq_train300_eval100_v1_qa | [
"region:us"
] | 2024-02-01T11:54:10+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train_qa", "path": "data/train_qa-*"}, {"split": "train_recite_qa", "path": "data/train_recite_qa-*"}, {"split": "eval_qa", "path": "data/eval_qa-*"}, {"split": "eval_recite_qa", "path": "data/eval_recite_qa-*"}, {"split": "all_docs", "path": "data/all_docs-*"}, {"split": "all_docs_eval", "path": "data/all_docs_eval-*"}, {"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "answers", "struct": [{"name": "answer_start", "sequence": "null"}, {"name": "text", "sequence": "string"}]}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train_qa", "num_bytes": 34574, "num_examples": 300}, {"name": "train_recite_qa", "num_bytes": 226733, "num_examples": 300}, {"name": "eval_qa", "num_bytes": 11254, "num_examples": 100}, {"name": "eval_recite_qa", "num_bytes": 74768, "num_examples": 100}, {"name": "all_docs", "num_bytes": 254478, "num_examples": 392}, {"name": "all_docs_eval", "num_bytes": 254451, "num_examples": 392}, {"name": "train", "num_bytes": 34574, "num_examples": 300}, {"name": "validation", "num_bytes": 11254, "num_examples": 100}], "download_size": 597904, "dataset_size": 902086}} | 2024-02-01T11:54:34+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "lmind_nq_train300_eval100_v1_qa"
More Information needed | [
"# Dataset Card for \"lmind_nq_train300_eval100_v1_qa\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"lmind_nq_train300_eval100_v1_qa\"\n\nMore Information needed"
] |
fd20f787161d6c6bd6df2d061721632f1b4e79f7 | # Dataset Card for "lmind_nq_train300_eval100_v1_doc"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/lmind_nq_train300_eval100_v1_doc | [
"region:us"
] | 2024-02-01T11:54:34+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train_qa", "path": "data/train_qa-*"}, {"split": "train_recite_qa", "path": "data/train_recite_qa-*"}, {"split": "eval_qa", "path": "data/eval_qa-*"}, {"split": "eval_recite_qa", "path": "data/eval_recite_qa-*"}, {"split": "all_docs", "path": "data/all_docs-*"}, {"split": "all_docs_eval", "path": "data/all_docs_eval-*"}, {"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "answers", "struct": [{"name": "answer_start", "sequence": "null"}, {"name": "text", "sequence": "string"}]}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train_qa", "num_bytes": 34574, "num_examples": 300}, {"name": "train_recite_qa", "num_bytes": 226733, "num_examples": 300}, {"name": "eval_qa", "num_bytes": 11254, "num_examples": 100}, {"name": "eval_recite_qa", "num_bytes": 74768, "num_examples": 100}, {"name": "all_docs", "num_bytes": 254478, "num_examples": 392}, {"name": "all_docs_eval", "num_bytes": 254451, "num_examples": 392}, {"name": "train", "num_bytes": 254478, "num_examples": 392}, {"name": "validation", "num_bytes": 254478, "num_examples": 392}], "download_size": 890789, "dataset_size": 1365214}} | 2024-02-01T11:55:03+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "lmind_nq_train300_eval100_v1_doc"
More Information needed | [
"# Dataset Card for \"lmind_nq_train300_eval100_v1_doc\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"lmind_nq_train300_eval100_v1_doc\"\n\nMore Information needed"
] |
223a8c4539ef382a8f860afc1ebe74f2c7f77463 | # Dataset Card for "lmind_nq_train300_eval100_v1_docidx"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/lmind_nq_train300_eval100_v1_docidx | [
"region:us"
] | 2024-02-01T11:55:05+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train_qa", "path": "data/train_qa-*"}, {"split": "train_recite_qa", "path": "data/train_recite_qa-*"}, {"split": "eval_qa", "path": "data/eval_qa-*"}, {"split": "eval_recite_qa", "path": "data/eval_recite_qa-*"}, {"split": "all_docs", "path": "data/all_docs-*"}, {"split": "all_docs_eval", "path": "data/all_docs_eval-*"}, {"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "answers", "struct": [{"name": "answer_start", "sequence": "null"}, {"name": "text", "sequence": "string"}]}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train_qa", "num_bytes": 34574, "num_examples": 300}, {"name": "train_recite_qa", "num_bytes": 226733, "num_examples": 300}, {"name": "eval_qa", "num_bytes": 11254, "num_examples": 100}, {"name": "eval_recite_qa", "num_bytes": 74768, "num_examples": 100}, {"name": "all_docs", "num_bytes": 254478, "num_examples": 392}, {"name": "all_docs_eval", "num_bytes": 254451, "num_examples": 392}, {"name": "train", "num_bytes": 254478, "num_examples": 392}, {"name": "validation", "num_bytes": 254451, "num_examples": 392}], "download_size": 894547, "dataset_size": 1365187}} | 2024-02-01T11:55:31+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "lmind_nq_train300_eval100_v1_docidx"
More Information needed | [
"# Dataset Card for \"lmind_nq_train300_eval100_v1_docidx\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"lmind_nq_train300_eval100_v1_docidx\"\n\nMore Information needed"
] |
8fa75359e1ff1f0041109f7f0a4d5e6c6269d751 | # Dataset Card for "lmind_nq_train300_eval100_v1_doc_qa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/lmind_nq_train300_eval100_v1_doc_qa | [
"region:us"
] | 2024-02-01T11:55:31+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train_qa", "path": "data/train_qa-*"}, {"split": "train_recite_qa", "path": "data/train_recite_qa-*"}, {"split": "eval_qa", "path": "data/eval_qa-*"}, {"split": "eval_recite_qa", "path": "data/eval_recite_qa-*"}, {"split": "all_docs", "path": "data/all_docs-*"}, {"split": "all_docs_eval", "path": "data/all_docs_eval-*"}, {"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "answers", "struct": [{"name": "answer_start", "sequence": "null"}, {"name": "text", "sequence": "string"}]}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train_qa", "num_bytes": 34574, "num_examples": 300}, {"name": "train_recite_qa", "num_bytes": 226733, "num_examples": 300}, {"name": "eval_qa", "num_bytes": 11254, "num_examples": 100}, {"name": "eval_recite_qa", "num_bytes": 74768, "num_examples": 100}, {"name": "all_docs", "num_bytes": 254478, "num_examples": 392}, {"name": "all_docs_eval", "num_bytes": 254451, "num_examples": 392}, {"name": "train", "num_bytes": 289052, "num_examples": 692}, {"name": "validation", "num_bytes": 11254, "num_examples": 100}], "download_size": 756998, "dataset_size": 1156564}} | 2024-02-01T11:55:59+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "lmind_nq_train300_eval100_v1_doc_qa"
More Information needed | [
"# Dataset Card for \"lmind_nq_train300_eval100_v1_doc_qa\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"lmind_nq_train300_eval100_v1_doc_qa\"\n\nMore Information needed"
] |
eefa14b3efa56884882258cfbce728dd6ca3f7b7 | # Dataset Card for "lmind_nq_train300_eval100_v1_recite_qa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/lmind_nq_train300_eval100_v1_recite_qa | [
"region:us"
] | 2024-02-01T11:55:59+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train_qa", "path": "data/train_qa-*"}, {"split": "train_recite_qa", "path": "data/train_recite_qa-*"}, {"split": "eval_qa", "path": "data/eval_qa-*"}, {"split": "eval_recite_qa", "path": "data/eval_recite_qa-*"}, {"split": "all_docs", "path": "data/all_docs-*"}, {"split": "all_docs_eval", "path": "data/all_docs_eval-*"}, {"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "answers", "struct": [{"name": "answer_start", "sequence": "null"}, {"name": "text", "sequence": "string"}]}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train_qa", "num_bytes": 34574, "num_examples": 300}, {"name": "train_recite_qa", "num_bytes": 226733, "num_examples": 300}, {"name": "eval_qa", "num_bytes": 11254, "num_examples": 100}, {"name": "eval_recite_qa", "num_bytes": 74768, "num_examples": 100}, {"name": "all_docs", "num_bytes": 254478, "num_examples": 392}, {"name": "all_docs_eval", "num_bytes": 254451, "num_examples": 392}, {"name": "train", "num_bytes": 481211, "num_examples": 692}, {"name": "validation", "num_bytes": 74768, "num_examples": 100}], "download_size": 918077, "dataset_size": 1412237}} | 2024-02-01T11:56:30+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "lmind_nq_train300_eval100_v1_recite_qa"
More Information needed | [
"# Dataset Card for \"lmind_nq_train300_eval100_v1_recite_qa\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"lmind_nq_train300_eval100_v1_recite_qa\"\n\nMore Information needed"
] |
9ff60aeaceb9bee8849d5bdc5e03c74f52d4ff12 | # Dataset Card for "lmind_nq_train300_eval100_v1_reciteonly_qa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/lmind_nq_train300_eval100_v1_reciteonly_qa | [
"region:us"
] | 2024-02-01T11:56:31+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train_qa", "path": "data/train_qa-*"}, {"split": "train_recite_qa", "path": "data/train_recite_qa-*"}, {"split": "eval_qa", "path": "data/eval_qa-*"}, {"split": "eval_recite_qa", "path": "data/eval_recite_qa-*"}, {"split": "all_docs", "path": "data/all_docs-*"}, {"split": "all_docs_eval", "path": "data/all_docs_eval-*"}, {"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "answers", "struct": [{"name": "answer_start", "sequence": "null"}, {"name": "text", "sequence": "string"}]}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train_qa", "num_bytes": 34574, "num_examples": 300}, {"name": "train_recite_qa", "num_bytes": 226733, "num_examples": 300}, {"name": "eval_qa", "num_bytes": 11254, "num_examples": 100}, {"name": "eval_recite_qa", "num_bytes": 74768, "num_examples": 100}, {"name": "all_docs", "num_bytes": 254478, "num_examples": 392}, {"name": "all_docs_eval", "num_bytes": 254451, "num_examples": 392}, {"name": "train", "num_bytes": 226733, "num_examples": 300}, {"name": "validation", "num_bytes": 74768, "num_examples": 100}], "download_size": 760921, "dataset_size": 1157759}} | 2024-02-01T11:57:02+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "lmind_nq_train300_eval100_v1_reciteonly_qa"
More Information needed | [
"# Dataset Card for \"lmind_nq_train300_eval100_v1_reciteonly_qa\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"lmind_nq_train300_eval100_v1_reciteonly_qa\"\n\nMore Information needed"
] |
a7108a2170dc9de84a1fd9294815ac137665e3c3 |
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | introspector/unimath | [
"region:us"
] | 2024-02-01T12:01:49+00:00 | {"license": "openrail", "task_categories": ["summarization"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": ["/batch2/trace_log.20.2024.02.01-07.27.44.org", "/batch2/trace_log.19.2024.02.01-07.27.44.org", "/batch2/trace_log.18.2024.02.01-07.27.44.org", "/batch2/trace_log.17.2024.02.01-07.27.44.org", "/batch2/trace_log.16.2024.02.01-07.27.44.org", "/batch2/trace_log.15.2024.02.01-07.27.44.org", "/batch2/trace_log.14.2024.02.01-07.27.44.org", "/batch2/trace_log.13.2024.02.01-07.27.44.org", "/batch2/trace_log.12.2024.02.01-07.27.44.org", "/batch2/trace_log.11.2024.02.01-07.27.44.org", "/batch2/trace_log.10.2024.02.01-07.27.44.org", "/batch2/trace_log.9.2024.02.01-07.27.44.org", "/batch2/trace_log.8.2024.02.01-07.27.44.org", "/batch2/trace_log.7.2024.02.01-07.27.44.org", "/batch2/trace_log.6.2024.02.01-07.27.44.org", "/batch2/trace_log.5.2024.02.01-07.27.44.org", "/batch2/trace_log.4.2024.02.01-07.27.44.org", "/batch2/trace_log.3.2024.02.01-07.27.44.org", "/batch2/trace_log.2.2024.02.01-07.27.44.org", "/batch2/trace_log.1.2024.02.01-07.27.44.org"]}, {"split": "test", "path": ["/batch2/trace_log.9.2024.02.01-07.27.44.org"]}]}]} | 2024-02-12T13:33:08+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Dataset Name
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] |
90b5e838f243a0f98726571644c29f23bb3c354e | # Dataset Card for "dataset_0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | S-AA-D/dataset_0 | [
"region:us"
] | 2024-02-01T12:25:29+00:00 | {"dataset_info": {"features": [{"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 161158, "num_examples": 114}], "download_size": 39841, "dataset_size": 161158}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2024-02-01T13:01:34+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "dataset_0"
More Information needed | [
"# Dataset Card for \"dataset_0\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"dataset_0\"\n\nMore Information needed"
] |
bd31777d54b76b29878a396e4d6f7183eeda72f8 |
# National Climate Targets Training Dataset – Climate Policy Radar
A dataset of climate targets made by national governments in their laws, policies and UNFCCC submissions which has been used to train a classifier. Text was sourced from the [Climate Policy Radar database](https://app.climatepolicyradar.org).
We define a target as an aim to achieve a specific outcome, that is quantifiable and is given a deadline.
This dataset distinguishes between different types of targets:
- **Reduction** (a.k.a. emissions reduction): a target referring to a reduction in greenhouse gas emissions, either economy-wide or for a sector.
- **Net zero**: a commitment to balance GHG emissions with removal, effectively reducing the net emissions to zero.
- **Other**: those that do not fit into the Reduction or Net Zero category but satisfy our definition of a target, e.g. renewable energy targets.
*IMPORTANT NOTE:* this dataset has been used to train a machine learning model, and **is not a list of all climate targets published by national governments**.
## Dataset Description
This dataset includes 2,610 text passages containing 1,193 target mentions annotated in a multilabel setting: one text passage can be assigned to 0 or more target types. This breaks down as follows.
| | Number of passages |
|:--------------|--------:|
| NZT | 203 |
| Reduction | 359 |
| Other | 631 |
| No Annotation | 1,584 |
It was annotated by 3 domain experts, with inter-annotator agreement measured to ensure consistency. Annotator `2` is a data scientist whose annotations come from a combination of sampled negatives and errors caught during post-hoc reviews.
All text is in English: the `translated` column describes whether it has been translated from another language using the Google Cloud Translation API. Further to the text and annotations, we also include characteristics of the documents we use to make equity calculations and anonymised assignment of annotations to annotators.
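As a minimal sketch of reading the multilabel annotations (the `annotation_*` column names and the `train` split come from the dataset metadata; the exact usage is illustrative, not prescriptive):

```python
from datasets import load_dataset

ds = load_dataset("ClimatePolicyRadar/national-climate-targets", split="train")

# Each passage carries three binary columns; a passage with all three at 0 has no target
example = ds[0]
labels = [t for t in ("NZT", "Reduction", "Other") if example[f"annotation_{t}"] == 1]
```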
For more information on the dataset and its creation see **our paper TBA**.
## License
Our dataset is licensed as [CC by 4.0](https://creativecommons.org/licenses/by/4.0/).
Please read our [Terms of Use](https://app.climatepolicyradar.org/terms-of-use), including any specific terms relevant to commercial use. Contact [email protected] with any questions.
## Links
<!-- Provide the basic links for the dataset. -->
- **Repository:** [coming soon]
- **Paper** [coming soon]
## Citation
[Coming soon]
## Authors & Contact
Climate Policy Radar team: Matyas Juhasz, Tina Marchand, Roshan Melwani, Kalyan Dutia, Sarah Goodenough, Harrison Pim, and Henry Franks.
https://climatepolicyradar.org | ClimatePolicyRadar/national-climate-targets | [
"license:cc-by-4.0",
"region:us"
] | 2024-02-01T12:46:18+00:00 | {"license": "cc-by-4.0", "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "annotation_agent", "dtype": "int64"}, {"name": "geography", "dtype": "string"}, {"name": "region", "dtype": "string"}, {"name": "translated", "dtype": "bool"}, {"name": "annotation_NZT", "dtype": "int64"}, {"name": "annotation_Reduction", "dtype": "int64"}, {"name": "annotation_Other", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 2912069, "num_examples": 2610}], "download_size": 1522649, "dataset_size": 2912069}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2024-02-05T15:06:50+00:00 | [] | [] | TAGS
#license-cc-by-4.0 #region-us
| National Climate Targets Training Dataset – Climate Policy Radar
================================================================
A dataset of climate targets made by national governments in their laws, policies and UNFCCC submissions which has been used to train a classifier. Text was sourced from the Climate Policy Radar database.
We define a target as an aim to achieve a specific outcome, that is quantifiable and is given a deadline.
This dataset distinguishes between different types of targets:
* Reduction (a.k.a. emissions reduction): a target referring to a reduction in greenhouse gas emissions, either economy-wide or for a sector.
* Net zero: a commitment to balance GHG emissions with removal, effectively reducing the net emissions to zero.
* Other: those that do not fit into the Reduction or Net Zero category but satisfy our definition of a target, e.g. renewable energy targets.
*IMPORTANT NOTE:* this dataset has been used to train a machine learning model, and is not a list of all climate targets published by national governments.
Dataset Description
-------------------
This dataset includes 2,610 text passages containing 1,193 target mentions annotated in a multilabel setting: one text passage can be assigned to 0 or more target types. This breaks down as follows.
It was annotated by 3 domain experts, with inter-annotator agreement measured to ensure consistency. Annotator '2' is a data scientist whose annotations come from a combination of sampled negatives and errors caught during post-hoc reviews.
All text is in English: the 'translated' column describes whether it has been translated from another language using the Google Cloud Translation API. Further to the text and annotations, we also include characteristics of the documents we use to make equity calculations and anonymised assignment of annotations to annotators.
For more information on the dataset and its creation see our paper TBA.
License
-------
Our dataset is licensed as CC by 4.0.
Please read our Terms of Use, including any specific terms relevant to commercial use. Contact partners@URL with any questions.
Links
-----
* Repository: [coming soon]
* Paper [coming soon]
[Coming soon]
Authors & Contact
-----------------
Climate Policy Radar team: Matyas Juhasz, Tina Marchand, Roshan Melwani, Kalyan Dutia, Sarah Goodenough, Harrison Pim, and Henry Franks.
URL
| [] | [
"TAGS\n#license-cc-by-4.0 #region-us \n"
] |
561c2d93d19af8c2e6656f3583e744a224298474 | NPI and Identification
🆔 NPI: National Provider Identifier, a unique identification number for covered health care providers.
🧑 EntityTypeCode: Indicates whether the provider is an individual (1) or an organization (2).
🔁 ReplacementNPI: NPI that replaces a previous NPI, if applicable.
💼 EmployerIdentificationNumberEIN: Tax identification number for the provider, if they are an organization.
Provider Names and Credentials
🏢 ProviderOrganizationNameLegalBusinessName: Legal business name of the provider, if an organization.
👨👩👧 ProviderLastNameLegalName: Last (family) name of the provider, if an individual.
📛 ProviderFirstName: First (given) name of the provider, if an individual.
🌟 ProviderMiddleName: Middle name of the provider, if applicable.
📌 ProviderNamePrefixText: Prefix to the provider's name (e.g., Dr., Mr., Ms.).
🏷️ ProviderNameSuffixText: Suffix to the provider's name (e.g., Jr., Sr., III).
🎓 ProviderCredentialText: Credentials of the provider (e.g., MD, DDS, RN).
Other Provider Information
🏥 ProviderOtherOrganizationName: Other organization name used by the provider.
🔠 ProviderOtherOrganizationNameTypeCode: Type code for the other organization name.
🔄 ProviderOtherLastName: Other last name used by the provider.
➡️ ProviderOtherFirstName: Other first name used by the provider.
🆗 ProviderOtherMiddleName: Other middle name used by the provider.
🔼 ProviderOtherNamePrefixText: Other prefix to the provider's name.
🔽 ProviderOtherNameSuffixText: Other suffix to the provider's name.
📜 ProviderOtherCredentialText: Other credentials used by the provider.
🈯 ProviderOtherLastNameTypeCode: Type code for the other last name used.
Business Mailing Address
📫 ProviderFirstLineBusinessMailingAddress: First line of the provider's business mailing address.
📬 ProviderSecondLineBusinessMailingAddress: Second line of the provider's business mailing address.
🏙️ ProviderBusinessMailingAddressCityName: City name of the provider's business mailing address.
📍 ProviderBusinessMailingAddressStateName: State name of the provider's business mailing address.
📮 ProviderBusinessMailingAddressPostalCode: Postal code of the provider's business mailing address.
🌍 ProviderBusinessMailingAddressCountryCodeIfoutsideUS: Country code if outside the U.S.
📞 ProviderBusinessMailingAddressTelephoneNumber: Telephone number for the business mailing address.
📠 ProviderBusinessMailingAddressFaxNumber: Fax number for the business mailing address.
Business Practice Location Address
🏠 ProviderFirstLineBusinessPracticeLocationAddress: First line of the provider's business practice location address.
🏡 ProviderSecondLineBusinessPracticeLocationAddress: Second line of the provider's business practice location address.
🌆 ProviderBusinessPracticeLocationAddressCityName: City name of the provider's practice location.
🗺️ ProviderBusinessPracticeLocationAddressStateName: State name of the provider's practice location.
🛂 ProviderBusinessPracticeLocationAddressPostalCode: Postal code of the provider's practice location.
🌏 ProviderBusinessPracticeLocationAddressCountryCodeIfoutsideUS: Country code if the practice location is outside the U.S.
📲 ProviderBusinessPracticeLocationAddressTelephoneNumber: Telephone number for the practice location.
🖨️ ProviderBusinessPracticeLocationAddressFaxNumber: Fax number for the practice location.
Dates and Status
📅 ProviderEnumerationDate: The date the provider was first added to the NPI registry.
🔄 LastUpdateDate: The date of the last update to the provider's information.
❌ NPIDeactivationReasonCode: Reason code for NPI deactivation, if applicable.
🔚 NPIDeactivationDate: Date of NPI deactivation, if applicable.
🔙 NPIReactivationDate: Date of NPI reactivation, if applicable.
Provider Details
🚹🚺 ProviderGenderCode: Gender code of the provider (if an individual).
👤 AuthorizedOfficialLastName: Last name of the authorized official.
👤 AuthorizedOfficialFirstName: First name of the authorized official.
👤 AuthorizedOfficialMiddleName: Middle name of the authorized official.
📝 AuthorizedOfficialTitleorPosition: Title or position of the authorized official.
📞 AuthorizedOfficialTelephoneNumber: Telephone number of the authorized official.
Licensing and Taxonomy
(For brevity, the descriptions for Healthcare Provider Taxonomy Codes, Provider License Numbers, and State Codes are grouped together due to their repetitive nature across multiple entries.)
🧬 HealthcareProviderTaxonomyCode: Code indicating the provider's specific type or classification of health care supply.
🔑 ProviderLicenseNumber: License number assigned to the provider.
🗺️ ProviderLicenseNumberStateCode: State code where the provider is licensed.
🔀 HealthcareProviderPrimaryTaxonomySwitch: Indicates if the taxonomy code is the provider's primary code.
Other Identifiers
(Repeated for multiple other identifiers with type codes, states, and issuers.)
🔖 OtherProviderIdentifier: Other identifiers used to identify the provider.
🆔 OtherProviderIdentifierTypeCode: Type code of the other identifier.
🗺️ OtherProviderIdentifierState: State code related to the other identifier.
🏢 OtherProviderIdentifierIssuer: Issuer of the other identifier.
Organizational Details and Certification
❓ IsSoleProprietor: Indicates if the provider is a sole proprietor.
🏢 IsOrganizationSubpart: Indicates if the provider is a subpart of an organization.
🏢 ParentOrganizationLBN: Legal business name of the parent organization.
💼 ParentOrganizationTIN: Tax Identification Number of the parent organization.
📛 AuthorizedOfficialNamePrefixText: Prefix of the authorized official's name.
🏷️ AuthorizedOfficialNameSuffixText: Suffix of the authorized official's name.
🎓 AuthorizedOfficialCredentialText: Credentials of the authorized official.
🧩 HealthcareProviderTaxonomyGroup: Group taxonomy codes indicating shared characteristics.
This comprehensive outline provides a detailed understanding of the data structure, making it easier for educators and students alike to navigate and utilize the information effectively in various learning scenarios.
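For a concrete sense of how these fields are used in practice, here is a minimal pandas sketch. The file name `npidata.csv` and the exact column spellings are assumptions, so adjust them to match your copy of the export:

```python
import pandas as pd

# Hypothetical export file; NPPES data ships as very large CSVs,
# so read everything as strings and limit rows while exploring.
df = pd.read_csv("npidata.csv", dtype=str, nrows=10_000)

# A handful of the fields described above.
cols = [
    "NPI",
    "EntityTypeCode",
    "ProviderOrganizationNameLegalBusinessName",
    "ProviderLastNameLegalName",
    "ProviderFirstName",
    "ProviderBusinessPracticeLocationAddressStateName",
]
subset = df[cols]

# EntityTypeCode: 1 = individual, 2 = organization.
individuals = subset[subset["EntityTypeCode"] == "1"]
print(individuals.head())
```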
Dataset: awacke1/NPI-20240107 · License: MIT
# Dataset Card for "ICPR_testing_check"
Dataset: nourheshamshaheen/ICPR_testing_check. Each example pairs an `image` with a `label` drawn from 15 chart-type classes: area, heatmap, horizontal bar, horizontal interval, line, manhattan, map, pie, scatter, scatter-line, surface, venn, vertical bar, vertical box, and vertical interval. The `train` split holds 11,388 examples (roughly 815 MB).

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
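To take a quick look at the data (standard `datasets` usage; the class names above come from the dataset's own feature definition):

```python
from datasets import load_dataset

ds = load_dataset("nourheshamshaheen/ICPR_testing_check", split="train")
print(ds.features["label"].names)             # the 15 chart-type classes
sample = ds[0]
print(sample["label"], sample["image"].size)  # label index and PIL image size
```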
# TemplateGSM Dataset
The TemplateGSM dataset is a novel and extensive collection designed for advancing the study and application of mathematical reasoning within the realm of artificial intelligence. This dataset is crafted to challenge and evaluate the capabilities of language models in understanding and generating solutions to mathematical problems derived from a set of predefined problem templates using examples from the GSM8K dataset as prototypes. Each template encapsulates a unique mathematical problem structure, offering a diverse array of challenges that span various domains of mathematics.
GitHub Homepage: [[link]](https://github.com/yifanzhang-pro/syntax-semantics)
## Objective
TemplateGSM aims to serve as a benchmark for:
- Assessing language models' proficiency in mathematical reasoning and symbolic computation.
- Training and fine-tuning language models to improve their performance in generating accurate and logically sound mathematical solutions.
- Encouraging the development of models capable of understanding and solving complex mathematical problems, thereby bridging the gap between natural language processing and mathematical reasoning.
## Dataset Structure
TemplateGSM is organized into configurations based on the volume of problems generated from each template:
### Configurations
- **templategsm-32-10k**: Contains 10,000 problems generated from each of the 32 templates, totaling 320,000 individual problems.
- **templategsm-32-100k**: Expands each template's reach by generating 100,000 problems, yielding a dataset of 3.2 million problems.
### Data Fields
Each problem in the dataset includes the following fields:
- `problem`: The problem statement.
- `solution_code`: A commented solution code that solves the problem in Python.
- `solution_wocode`: The solution in natural language without the use of code.
- `result`: The final answer to the problem.
- `template_name`: This field indicates the template from which the problem was generated, e.g., `gsm0001-1`.
- `idx`: An index unique to each problem within its template.
## How to Use
```yaml
configs:
- config_name: templategsm-32-10k
data_files: data/10k/0000-0031/*.jsonl
default: true
- config_name: templategsm-32-100k
data_files: data/100k/0000-0031/*.jsonl
```
To access the TemplateGSM dataset, you can use the Huggingface `datasets` library:
```python
from datasets import load_dataset
# Load a specific configuration
dataset = load_dataset("math-ai/TemplateGSM", "templategsm-32-10k") # or any valid config_name
```
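Once loaded, each record exposes the fields described above; a quick way to inspect one (the `train` split name is an assumption here, and the comments are illustrative):

```python
example = dataset["train"][0]

print(example["problem"])          # natural-language problem statement
print(example["solution_wocode"])  # solution in natural language
print(example["solution_code"])    # commented Python solution
print(example["result"])           # final answer
print(example["template_name"], example["idx"])
```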
## License
This dataset is made available under the Creative Commons Attribution 4.0 International (CC BY 4.0) license.
## Citation
If you utilize the TemplateGSM dataset in your research or application, please consider citing it (GitHub Homepage: [[link]](https://github.com/yifanzhang-pro/syntax-semantics)):
```bibtex
@misc{templatemath2024,
title={TemplateMath: Syntactic Data Generation for Mathematical Problems},
author={Zhang, Yifan and Luo, Yifan and Yuan, Yang and Yao, Andrew Chi-Chih},
year={2024},
}
```
"task_categories:text-generation",
"task_categories:question-answering",
"size_categories:1B<n<10B",
"language:en",
"license:cc-by-4.0",
"mathematical-reasoning",
"reasoning",
"finetuning",
"pretraining",
"llm",
"region:us"
] | 2024-02-01T13:22:35+00:00 | {"language": ["en"], "license": "cc-by-4.0", "size_categories": ["1B<n<10B"], "task_categories": ["text-generation", "question-answering"], "pretty_name": "TemplateGSM", "configs": [{"config_name": "templategsm-32-10k", "data_files": "data/10k/0000-0031/*.jsonl", "default": true}, {"config_name": "templategsm-32-100k", "data_files": "data/100k/0000-0031/*.jsonl"}], "tags": ["mathematical-reasoning", "reasoning", "finetuning", "pretraining", "llm"]} | 2024-02-02T12:14:42+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-generation #task_categories-question-answering #size_categories-1B<n<10B #language-English #license-cc-by-4.0 #mathematical-reasoning #reasoning #finetuning #pretraining #llm #region-us
|
# TemplateGSM Dataset
The TemplateGSM dataset is a novel and extensive collection designed for advancing the study and application of mathematical reasoning within the realm of artificial intelligence. This dataset is crafted to challenge and evaluate the capabilities of language models in understanding and generating solutions to mathematical problems derived from a set of predefined problem templates using examples from the GSM8K dataset as prototypes. Each template encapsulates a unique mathematical problem structure, offering a diverse array of challenges that span various domains of mathematics.
GitHub Homepage: [[link]](URL
## Objective
TemplateGSM aims to serve as a benchmark for:
- Assessing language models' proficiency in mathematical reasoning and symbolic computation.
- Training and fine-tuning language models to improve their performance in generating accurate and logically sound mathematical solutions.
- Encouraging the development of models capable of understanding and solving complex mathematical problems, thereby bridging the gap between natural language processing and mathematical reasoning.
## Dataset Structure
TemplateGSM is organized into configurations based on the volume of problems generated from each template:
### Configurations
- templategsm-32-10k: Contains 10,000 problems generated from each of the 32 templates, totaling over 320,000 individual problems.
- templategsm-32-100k: Expands each template's reach by generating 100,000 problems, culminating in a dataset exceeding 3.2 million problems.
### Data Fields
Each problem in the dataset includes the following fields:
- 'problem': The problem statement.
- 'solution_code': A commented solution code that solves the problem in Python.
- 'solution_wocode': The solution in natural language without the use of code.
- 'result': The final answer to the problem.
- 'template_name': This field indicates the template from which the problem was generated, e.g., 'gsm0001-1'.
- 'idx': An index unique to each problem within its template.
## How to Use
To access the TemplateGSM dataset, you can use the Huggingface 'datasets' library:
## License
This dataset is made available under the Creative Commons Attribution 4.0 International (CC BY 4.0) license.
If you utilize the TemplateGSM dataset in your research or application, please consider citing it (GitHub Homepage: [[link]](URL
'''bibtex
@misc{templatemath2024,
title={TemplateMath: Syntactic Data Generation for Mathematical Problems},
author={Zhang, Yifan and Luo, Yifan and Yuan, Yang and Yao, Andrew Chi-Chih},
year={2024},
}
| [
"# TemplateGSM Dataset\n\nThe TemplateGSM dataset is a novel and extensive collection designed for advancing the study and application of mathematical reasoning within the realm of artificial intelligence. This dataset is crafted to challenge and evaluate the capabilities of language models in understanding and generating solutions to mathematical problems derived from a set of predefined problem templates using examples from the GSM8K dataset as prototypes. Each template encapsulates a unique mathematical problem structure, offering a diverse array of challenges that span various domains of mathematics.\n\nGitHub Homepage: [[link]](URL",
"## Objective\n\nTemplateGSM aims to serve as a benchmark for:\n- Assessing language models' proficiency in mathematical reasoning and symbolic computation.\n- Training and fine-tuning language models to improve their performance in generating accurate and logically sound mathematical solutions.\n- Encouraging the development of models capable of understanding and solving complex mathematical problems, thereby bridging the gap between natural language processing and mathematical reasoning.",
"## Dataset Structure\n\nTemplateGSM is organized into configurations based on the volume of problems generated from each template:",
"### Configurations\n\n- templategsm-32-10k: Contains 10,000 problems generated from each of the 32 templates, totaling over 320,000 individual problems.\n- templategsm-32-100k: Expands each template's reach by generating 100,000 problems, culminating in a dataset exceeding 3.2 million problems.",
"### Data Fields\n\nEach problem in the dataset includes the following fields:\n- 'problem': The problem statement.\n- 'solution_code': A commented solution code that solves the problem in Python.\n- 'solution_wocode': The solution in natural language without the use of code.\n- 'result': The final answer to the problem.\n- 'template_name': This field indicates the template from which the problem was generated, e.g., 'gsm0001-1'.\n- 'idx': An index unique to each problem within its template.",
"## How to Use\n\n\n\nTo access the TemplateGSM dataset, you can use the Huggingface 'datasets' library:",
"## License\n\nThis dataset is made available under the Creative Commons Attribution 4.0 International (CC BY 4.0) license.\n\nIf you utilize the TemplateGSM dataset in your research or application, please consider citing it (GitHub Homepage: [[link]](URL\n\n'''bibtex\n@misc{templatemath2024,\n title={TemplateMath: Syntactic Data Generation for Mathematical Problems},\n author={Zhang, Yifan and Luo, Yifan and Yuan, Yang and Yao, Andrew Chi-Chih},\n year={2024},\n}"
] | [
"TAGS\n#task_categories-text-generation #task_categories-question-answering #size_categories-1B<n<10B #language-English #license-cc-by-4.0 #mathematical-reasoning #reasoning #finetuning #pretraining #llm #region-us \n",
"# TemplateGSM Dataset\n\nThe TemplateGSM dataset is a novel and extensive collection designed for advancing the study and application of mathematical reasoning within the realm of artificial intelligence. This dataset is crafted to challenge and evaluate the capabilities of language models in understanding and generating solutions to mathematical problems derived from a set of predefined problem templates using examples from the GSM8K dataset as prototypes. Each template encapsulates a unique mathematical problem structure, offering a diverse array of challenges that span various domains of mathematics.\n\nGitHub Homepage: [[link]](URL",
"## Objective\n\nTemplateGSM aims to serve as a benchmark for:\n- Assessing language models' proficiency in mathematical reasoning and symbolic computation.\n- Training and fine-tuning language models to improve their performance in generating accurate and logically sound mathematical solutions.\n- Encouraging the development of models capable of understanding and solving complex mathematical problems, thereby bridging the gap between natural language processing and mathematical reasoning.",
"## Dataset Structure\n\nTemplateGSM is organized into configurations based on the volume of problems generated from each template:",
"### Configurations\n\n- templategsm-32-10k: Contains 10,000 problems generated from each of the 32 templates, totaling over 320,000 individual problems.\n- templategsm-32-100k: Expands each template's reach by generating 100,000 problems, culminating in a dataset exceeding 3.2 million problems.",
"### Data Fields\n\nEach problem in the dataset includes the following fields:\n- 'problem': The problem statement.\n- 'solution_code': A commented solution code that solves the problem in Python.\n- 'solution_wocode': The solution in natural language without the use of code.\n- 'result': The final answer to the problem.\n- 'template_name': This field indicates the template from which the problem was generated, e.g., 'gsm0001-1'.\n- 'idx': An index unique to each problem within its template.",
"## How to Use\n\n\n\nTo access the TemplateGSM dataset, you can use the Huggingface 'datasets' library:",
"## License\n\nThis dataset is made available under the Creative Commons Attribution 4.0 International (CC BY 4.0) license.\n\nIf you utilize the TemplateGSM dataset in your research or application, please consider citing it (GitHub Homepage: [[link]](URL\n\n'''bibtex\n@misc{templatemath2024,\n title={TemplateMath: Syntactic Data Generation for Mathematical Problems},\n author={Zhang, Yifan and Luo, Yifan and Yuan, Yang and Yao, Andrew Chi-Chih},\n year={2024},\n}"
] |
# Dataset Card for "OSCAR-2301-Hindi-Cleaned"
Dataset: zicsx/OSCAR-2301-Hindi-Cleaned-2.0 · Hindi · Apache-2.0 · derived from Common Crawl. The data has a single `text` field; the `train` split holds 672,393 examples (~985 MB).

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
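A minimal way to peek at the text without downloading the full corpus is streaming mode (dataset id taken from this card):

```python
from itertools import islice

from datasets import load_dataset

ds = load_dataset("zicsx/OSCAR-2301-Hindi-Cleaned-2.0", split="train", streaming=True)
for row in islice(ds, 3):
    print(row["text"][:200])  # first 200 characters of each document
```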
# zsql-sqlite-dpo
This is a dataset for training machine learning models to convert natural
English language text into SQLite dialect SQL queries.
This dataset comprises 200,000 DPO pairs curated to support the rapid
development of text-to-SQL generation models. The uniqueness of this dataset
lies in its optimization process. The "chosen" field within each data pair
contains SQL queries that have been canonicalized, optimized, and which are
chosen from the candidate set which minimizes syntactic cyclomatic and
asymptotic complexity against the given schema.
Direct Preference Optimization (see [Rafailov et al.,
2023](https://arxiv.org/abs/2305.18290)) is a novel approach to preference
learning from positive and negative samples that modifies the behavior of
large-scale unsupervised language models to align with human preferences. This
method simplifies the fine-tuning process, making it more stable and
computationally efficient without the need for extensive hyperparameter tuning
or LM sampling, and has been shown to effectively control model outputs,
matching or surpassing existing methods.
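For reference, the DPO objective from Rafailov et al. (2023) optimizes the policy $\pi_\theta$ against a frozen reference $\pi_{\mathrm{ref}}$ over triples $(x, y_w, y_l)$ of prompt, preferred response, and rejected response:

$$
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x, y_w, y_l) \sim \mathcal{D}}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right]
$$

In this dataset the `chosen` field plays the role of $y_w$ and `rejected` of $y_l$; $\beta$ controls how far the policy may drift from the reference model.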
The source data is cleaned and filtered based on the following criteria (an illustrative executability check follows the list):
- Remove queries which are not in English.
- Remove queries which are not valid SQL queries.
- Remove queries which are not executable against the given schema.
- Remove queries which are executed against tables with non-Latin characters.
- Remove queries which use features not supported by the given database.
- Remove long queries which contain domain-specific knowledge which cause model confusion.
- Remove queries which do not fit within a 4096 token context window.
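One of these filters, executability against the given schema, can be approximated with Python's standard-library `sqlite3` module. The sketch below illustrates the idea and is not the dataset's actual cleaning code:

```python
import sqlite3

def is_executable(schema: str, query: str) -> bool:
    """Return True if `query` parses and plans against `schema` in SQLite."""
    conn = sqlite3.connect(":memory:")
    try:
        conn.executescript(schema)        # create the tables
        conn.execute(f"EXPLAIN {query}")  # plan the query without needing data
        return True
    except sqlite3.Error:
        return False
    finally:
        conn.close()
```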
## Usage
To load the dataset using the HuggingFace `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset("zerolink/zsql-sqlite-dpo")
```
To use in model fine-tuning, apply the following chat tokenizer:
```python
from transformers import AutoTokenizer

# `model` is the name or path of the base model being fine-tuned.
tokenizer = AutoTokenizer.from_pretrained(model)

def tokenize(element):
    # Pull the schema, question, and DPO-preferred query from the example.
    schema = element["schema"]
    question = element["question"]
    answer = element["chosen"]

    prompt = f"""
Using the schema:
{schema}
Generate SQL for the following question:
{question}
"""
    system = "Translate English to SQLite SQL."

    message = [
        {"role": "system", "content": system},
        {"role": "user", "content": prompt},
        {"role": "assistant", "content": answer},
    ]
    # Render the three-turn conversation with the model's chat template.
    output = tokenizer.apply_chat_template(
        message, add_generation_prompt=False, tokenize=True
    )
    return {"text": output}
```
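Applied across the whole dataset, this produces a `text` column ready for a trainer (assuming the tokenizer has a chat template configured):

```python
tokenized = dataset.map(tokenize)
```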
## Fields
The fields in this dataset are as follows:
| Field Name | Description |
| ---------- | ----------------------------------------------------------------------------------------------- |
| schema | The schema of the database. |
| question | The natural language question. |
| chosen | The DPO preferred SQL query. |
| rejected | The DPO rejected SQL query. |
| weight | The weight of the query in the reward function. |
## Sources
This dataset is derived from the following sources:
| Source | License | External Link |
| ---------------------- | ------------ | -------------------------------------------------------------------------------------------------------------------- |
| wikisql | BSD 3-Clause | [https://github.com/salesforce/WikiSQL](https://github.com/salesforce/WikiSQL) |
| spider | CC-BY-SA-4.0 | [https://huggingface.co/datasets/spider](https://huggingface.co/datasets/spider) |
| sql_create_context | CC-BY-4.0 | [https://huggingface.co/datasets/b-mc2/sql-create-context](https://huggingface.co/datasets/b-mc2/sql-create-context) |
| squall | CC-BY-SA-4.0 | [https://github.com/tzshi/squall](https://github.com/tzshi/squall) |
| sede | Apache-2.0 | [https://github.com/hirupert/sede](https://github.com/hirupert/sede) |
| nvbench | MIT | [https://github.com/TsinghuaDatabaseGroup/nvBench](https://github.com/TsinghuaDatabaseGroup/nvBench) |
| imdb | Not Found | [https://github.com/jkkummerfeld/text2sql-data](https://github.com/jkkummerfeld/text2sql-data) |
| advising | CC-BY-4.0 | [https://github.com/jkkummerfeld/text2sql-data](https://github.com/jkkummerfeld/text2sql-data) |
| atis | Not Found | [https://github.com/jkkummerfeld/text2sql-data](https://github.com/jkkummerfeld/text2sql-data) |
| restaurants | Not Found | [https://github.com/jkkummerfeld/text2sql-data](https://github.com/jkkummerfeld/text2sql-data) |
| scholar | Not Found | [https://github.com/jkkummerfeld/text2sql-data](https://github.com/jkkummerfeld/text2sql-data) |
| yelp | Not Found | [https://github.com/jkkummerfeld/text2sql-data](https://github.com/jkkummerfeld/text2sql-data) |
| academic | Not Found | [https://github.com/jkkummerfeld/text2sql-data](https://github.com/jkkummerfeld/text2sql-data) |
| criteria2sql | Apache-2.0 | [https://github.com/xiaojingyu92/Criteria2SQL](https://github.com/xiaojingyu92/Criteria2SQL) |
| eICU | CC-BY-4.0 | [https://github.com/glee4810/EHRSQL](https://github.com/glee4810/EHRSQL) |
| mimic_iii | CC-BY-4.0 | [https://github.com/glee4810/EHRSQL](https://github.com/glee4810/EHRSQL) |
| mimicsql_data | MIT | [https://github.com/wangpinggl/TREQS](https://github.com/wangpinggl/TREQS) |
| worldsoccerdatabase | CC-BY-SA-4.0 | [https://github.com/chiahsuan156/KaggleDBQA](https://github.com/chiahsuan156/KaggleDBQA) |
| whatcdhiphop | CC-BY-SA-4.0 | [https://github.com/chiahsuan156/KaggleDBQA](https://github.com/chiahsuan156/KaggleDBQA) |
| studentmathscore | CC-BY-SA-4.0 | [https://github.com/chiahsuan156/KaggleDBQA](https://github.com/chiahsuan156/KaggleDBQA) |
| pesticide | CC-BY-SA-4.0 | [https://github.com/chiahsuan156/KaggleDBQA](https://github.com/chiahsuan156/KaggleDBQA) |
| thehistoryofbaseball | CC-BY-SA-4.0 | [https://github.com/chiahsuan156/KaggleDBQA](https://github.com/chiahsuan156/KaggleDBQA) |
| uswildfires | CC-BY-SA-4.0 | [https://github.com/chiahsuan156/KaggleDBQA](https://github.com/chiahsuan156/KaggleDBQA) |
| geonucleardata | CC-BY-SA-4.0 | [https://github.com/chiahsuan156/KaggleDBQA](https://github.com/chiahsuan156/KaggleDBQA) |
| greatermanchestercrime | CC-BY-SA-4.0 | [https://github.com/chiahsuan156/KaggleDBQA](https://github.com/chiahsuan156/KaggleDBQA) |
Composition:

## License
This dataset is provided for academic and research purposes. Please adhere to
the specified license terms and conditions for usage and distribution.
Dataset: zerolink/zsql-sqlite-dpo · English · splits: 234,268 train / 26,030 test examples.
# Dataset Card for "OSCAR-2301-Hindi-Cleaned-2.0"
Dataset: zicsx/OSCAR-2301-Hindi-Cleaned · Hindi · Apache-2.0 · text-generation · tagged OSCAR-2301.

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
# Dataset Card for Evaluation run of rufjdk5480/WestLake-dpo-train-sft-v1
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [rufjdk5480/WestLake-dpo-train-sft-v1](https://huggingface.co/rufjdk5480/WestLake-dpo-train-sft-v1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_rufjdk5480__WestLake-dpo-train-sft-v1",
"harness_winogrande_5",
split="train")
```
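To list the available task configurations before picking one (standard `datasets` helper):

```python
from datasets import get_dataset_config_names

configs = get_dataset_config_names(
    "open-llm-leaderboard/details_rufjdk5480__WestLake-dpo-train-sft-v1"
)
print(len(configs), configs[:5])
```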
## Latest results
These are the [latest results from run 2024-02-01T14:19:45.427052](https://huggingface.co/datasets/open-llm-leaderboard/details_rufjdk5480__WestLake-dpo-train-sft-v1/blob/main/results_2024-02-01T14-19-45.427052.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6228074117363218,
"acc_stderr": 0.03277947302632996,
"acc_norm": 0.622716710184122,
"acc_norm_stderr": 0.03346104331544058,
"mc1": 0.4810281517747858,
"mc1_stderr": 0.017490896405762346,
"mc2": 0.6779864188083103,
"mc2_stderr": 0.015438913814674077
},
"harness|arc:challenge|25": {
"acc": 0.6296928327645052,
"acc_stderr": 0.01411129875167495,
"acc_norm": 0.6578498293515358,
"acc_norm_stderr": 0.013864152159177275
},
"harness|hellaswag|10": {
"acc": 0.6848237402907787,
"acc_stderr": 0.004636365534819763,
"acc_norm": 0.8575980880302728,
"acc_norm_stderr": 0.0034874768122805278
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.35,
"acc_stderr": 0.047937248544110196,
"acc_norm": 0.35,
"acc_norm_stderr": 0.047937248544110196
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5925925925925926,
"acc_stderr": 0.04244633238353228,
"acc_norm": 0.5925925925925926,
"acc_norm_stderr": 0.04244633238353228
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6578947368421053,
"acc_stderr": 0.03860731599316092,
"acc_norm": 0.6578947368421053,
"acc_norm_stderr": 0.03860731599316092
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.63,
"acc_stderr": 0.048523658709391,
"acc_norm": 0.63,
"acc_norm_stderr": 0.048523658709391
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6452830188679245,
"acc_stderr": 0.02944517532819959,
"acc_norm": 0.6452830188679245,
"acc_norm_stderr": 0.02944517532819959
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7222222222222222,
"acc_stderr": 0.037455547914624555,
"acc_norm": 0.7222222222222222,
"acc_norm_stderr": 0.037455547914624555
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620333,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620333
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.32,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.32,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6184971098265896,
"acc_stderr": 0.037038511930995215,
"acc_norm": 0.6184971098265896,
"acc_norm_stderr": 0.037038511930995215
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.35294117647058826,
"acc_stderr": 0.04755129616062946,
"acc_norm": 0.35294117647058826,
"acc_norm_stderr": 0.04755129616062946
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5148936170212766,
"acc_stderr": 0.03267151848924777,
"acc_norm": 0.5148936170212766,
"acc_norm_stderr": 0.03267151848924777
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.45614035087719296,
"acc_stderr": 0.046854730419077895,
"acc_norm": 0.45614035087719296,
"acc_norm_stderr": 0.046854730419077895
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5241379310344828,
"acc_stderr": 0.0416180850350153,
"acc_norm": 0.5241379310344828,
"acc_norm_stderr": 0.0416180850350153
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.3941798941798942,
"acc_stderr": 0.02516798233389414,
"acc_norm": 0.3941798941798942,
"acc_norm_stderr": 0.02516798233389414
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4126984126984127,
"acc_stderr": 0.04403438954768177,
"acc_norm": 0.4126984126984127,
"acc_norm_stderr": 0.04403438954768177
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252604,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252604
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7419354838709677,
"acc_stderr": 0.02489246917246283,
"acc_norm": 0.7419354838709677,
"acc_norm_stderr": 0.02489246917246283
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.4876847290640394,
"acc_stderr": 0.035169204442208966,
"acc_norm": 0.4876847290640394,
"acc_norm_stderr": 0.035169204442208966
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7575757575757576,
"acc_stderr": 0.03346409881055953,
"acc_norm": 0.7575757575757576,
"acc_norm_stderr": 0.03346409881055953
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7525252525252525,
"acc_stderr": 0.030746300742124495,
"acc_norm": 0.7525252525252525,
"acc_norm_stderr": 0.030746300742124495
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8860103626943006,
"acc_stderr": 0.02293514405391945,
"acc_norm": 0.8860103626943006,
"acc_norm_stderr": 0.02293514405391945
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6076923076923076,
"acc_stderr": 0.024756000382130956,
"acc_norm": 0.6076923076923076,
"acc_norm_stderr": 0.024756000382130956
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3111111111111111,
"acc_stderr": 0.02822644674968351,
"acc_norm": 0.3111111111111111,
"acc_norm_stderr": 0.02822644674968351
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6722689075630253,
"acc_stderr": 0.03048991141767323,
"acc_norm": 0.6722689075630253,
"acc_norm_stderr": 0.03048991141767323
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3576158940397351,
"acc_stderr": 0.03913453431177258,
"acc_norm": 0.3576158940397351,
"acc_norm_stderr": 0.03913453431177258
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8275229357798165,
"acc_stderr": 0.01619780795684803,
"acc_norm": 0.8275229357798165,
"acc_norm_stderr": 0.01619780795684803
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5092592592592593,
"acc_stderr": 0.034093869469927006,
"acc_norm": 0.5092592592592593,
"acc_norm_stderr": 0.034093869469927006
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7843137254901961,
"acc_stderr": 0.028867431449849316,
"acc_norm": 0.7843137254901961,
"acc_norm_stderr": 0.028867431449849316
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7637130801687764,
"acc_stderr": 0.02765215314415927,
"acc_norm": 0.7637130801687764,
"acc_norm_stderr": 0.02765215314415927
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6860986547085202,
"acc_stderr": 0.031146796482972465,
"acc_norm": 0.6860986547085202,
"acc_norm_stderr": 0.031146796482972465
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7404580152671756,
"acc_stderr": 0.03844876139785271,
"acc_norm": 0.7404580152671756,
"acc_norm_stderr": 0.03844876139785271
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7272727272727273,
"acc_stderr": 0.04065578140908706,
"acc_norm": 0.7272727272727273,
"acc_norm_stderr": 0.04065578140908706
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7314814814814815,
"acc_stderr": 0.042844679680521934,
"acc_norm": 0.7314814814814815,
"acc_norm_stderr": 0.042844679680521934
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7361963190184049,
"acc_stderr": 0.03462419931615623,
"acc_norm": 0.7361963190184049,
"acc_norm_stderr": 0.03462419931615623
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4017857142857143,
"acc_stderr": 0.04653333146973646,
"acc_norm": 0.4017857142857143,
"acc_norm_stderr": 0.04653333146973646
},
"harness|hendrycksTest-management|5": {
"acc": 0.7572815533980582,
"acc_stderr": 0.04245022486384495,
"acc_norm": 0.7572815533980582,
"acc_norm_stderr": 0.04245022486384495
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8675213675213675,
"acc_stderr": 0.022209309073165616,
"acc_norm": 0.8675213675213675,
"acc_norm_stderr": 0.022209309073165616
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7994891443167306,
"acc_stderr": 0.014317653708594209,
"acc_norm": 0.7994891443167306,
"acc_norm_stderr": 0.014317653708594209
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6878612716763006,
"acc_stderr": 0.02494679222527231,
"acc_norm": 0.6878612716763006,
"acc_norm_stderr": 0.02494679222527231
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.39888268156424583,
"acc_stderr": 0.01637696614261008,
"acc_norm": 0.39888268156424583,
"acc_norm_stderr": 0.01637696614261008
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7222222222222222,
"acc_stderr": 0.02564686309713791,
"acc_norm": 0.7222222222222222,
"acc_norm_stderr": 0.02564686309713791
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6655948553054662,
"acc_stderr": 0.02679542232789394,
"acc_norm": 0.6655948553054662,
"acc_norm_stderr": 0.02679542232789394
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6975308641975309,
"acc_stderr": 0.025557653981868062,
"acc_norm": 0.6975308641975309,
"acc_norm_stderr": 0.025557653981868062
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4397163120567376,
"acc_stderr": 0.029609912075594113,
"acc_norm": 0.4397163120567376,
"acc_norm_stderr": 0.029609912075594113
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.45697522816166886,
"acc_stderr": 0.012722869501611419,
"acc_norm": 0.45697522816166886,
"acc_norm_stderr": 0.012722869501611419
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6397058823529411,
"acc_stderr": 0.029163128570670733,
"acc_norm": 0.6397058823529411,
"acc_norm_stderr": 0.029163128570670733
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6454248366013072,
"acc_stderr": 0.019353360547553707,
"acc_norm": 0.6454248366013072,
"acc_norm_stderr": 0.019353360547553707
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6454545454545455,
"acc_stderr": 0.04582004841505416,
"acc_norm": 0.6454545454545455,
"acc_norm_stderr": 0.04582004841505416
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7387755102040816,
"acc_stderr": 0.02812342933514278,
"acc_norm": 0.7387755102040816,
"acc_norm_stderr": 0.02812342933514278
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.835820895522388,
"acc_stderr": 0.026193923544454115,
"acc_norm": 0.835820895522388,
"acc_norm_stderr": 0.026193923544454115
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.84,
"acc_stderr": 0.03684529491774708,
"acc_norm": 0.84,
"acc_norm_stderr": 0.03684529491774708
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5421686746987951,
"acc_stderr": 0.0387862677100236,
"acc_norm": 0.5421686746987951,
"acc_norm_stderr": 0.0387862677100236
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8304093567251462,
"acc_stderr": 0.02878210810540171,
"acc_norm": 0.8304093567251462,
"acc_norm_stderr": 0.02878210810540171
},
"harness|truthfulqa:mc|0": {
"mc1": 0.4810281517747858,
"mc1_stderr": 0.017490896405762346,
"mc2": 0.6779864188083103,
"mc2_stderr": 0.015438913814674077
},
"harness|winogrande|5": {
"acc": 0.8279400157853196,
"acc_stderr": 0.010607731615247012
},
"harness|gsm8k|5": {
"acc": 0.6239575435936315,
"acc_stderr": 0.013342532064849779
}
}
```
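To work with the aggregated numbers programmatically, one option is to load the "results" configuration mentioned above; the exact record layout can vary between runs, so treat this as a sketch:

```python
from datasets import load_dataset

results = load_dataset(
    "open-llm-leaderboard/details_rufjdk5480__WestLake-dpo-train-sft-v1",
    "results",
    split="train",
)
print(results[0])  # aggregated metrics for the latest run
```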
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed]
\"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7994891443167306,\n \"acc_stderr\": 0.014317653708594209,\n \"acc_norm\": 0.7994891443167306,\n \"acc_norm_stderr\": 0.014317653708594209\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.6878612716763006,\n \"acc_stderr\": 0.02494679222527231,\n \"acc_norm\": 0.6878612716763006,\n \"acc_norm_stderr\": 0.02494679222527231\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.39888268156424583,\n \"acc_stderr\": 0.01637696614261008,\n \"acc_norm\": 0.39888268156424583,\n \"acc_norm_stderr\": 0.01637696614261008\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7222222222222222,\n \"acc_stderr\": 0.02564686309713791,\n \"acc_norm\": 0.7222222222222222,\n \"acc_norm_stderr\": 0.02564686309713791\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6655948553054662,\n \"acc_stderr\": 0.02679542232789394,\n \"acc_norm\": 0.6655948553054662,\n \"acc_norm_stderr\": 0.02679542232789394\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.6975308641975309,\n \"acc_stderr\": 0.025557653981868062,\n \"acc_norm\": 0.6975308641975309,\n \"acc_norm_stderr\": 0.025557653981868062\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4397163120567376,\n \"acc_stderr\": 0.029609912075594113,\n \"acc_norm\": 0.4397163120567376,\n \"acc_norm_stderr\": 0.029609912075594113\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.45697522816166886,\n \"acc_stderr\": 0.012722869501611419,\n \"acc_norm\": 0.45697522816166886,\n \"acc_norm_stderr\": 0.012722869501611419\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6397058823529411,\n \"acc_stderr\": 0.029163128570670733,\n \"acc_norm\": 0.6397058823529411,\n \"acc_norm_stderr\": 0.029163128570670733\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6454248366013072,\n \"acc_stderr\": 0.019353360547553707,\n \"acc_norm\": 0.6454248366013072,\n \"acc_norm_stderr\": 0.019353360547553707\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6454545454545455,\n \"acc_stderr\": 0.04582004841505416,\n \"acc_norm\": 0.6454545454545455,\n \"acc_norm_stderr\": 0.04582004841505416\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7387755102040816,\n \"acc_stderr\": 0.02812342933514278,\n \"acc_norm\": 0.7387755102040816,\n \"acc_norm_stderr\": 0.02812342933514278\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.835820895522388,\n \"acc_stderr\": 0.026193923544454115,\n \"acc_norm\": 0.835820895522388,\n \"acc_norm_stderr\": 0.026193923544454115\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.84,\n \"acc_stderr\": 0.03684529491774708,\n \"acc_norm\": 0.84,\n \"acc_norm_stderr\": 0.03684529491774708\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5421686746987951,\n \"acc_stderr\": 0.0387862677100236,\n \"acc_norm\": 0.5421686746987951,\n \"acc_norm_stderr\": 0.0387862677100236\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8304093567251462,\n \"acc_stderr\": 0.02878210810540171,\n \"acc_norm\": 0.8304093567251462,\n \"acc_norm_stderr\": 0.02878210810540171\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4810281517747858,\n \"mc1_stderr\": 0.017490896405762346,\n \"mc2\": 0.6779864188083103,\n \"mc2_stderr\": 0.015438913814674077\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8279400157853196,\n \"acc_stderr\": 0.010607731615247012\n },\n \"harness|gsm8k|5\": {\n \"acc\": 
0.6239575435936315,\n \"acc_stderr\": 0.013342532064849779\n }\n}\n```", "repo_url": "https://huggingface.co/rufjdk5480/WestLake-dpo-train-sft-v1", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|arc:challenge|25_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|gsm8k|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|hellaswag|10_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T14-19-45.427052.parquet", 
"**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-management|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-virology|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-02-01T14-19-45.427052.parquet", 
"**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-management|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T14-19-45.427052.parquet", 
"**/details_harness|hendrycksTest-nutrition|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-virology|5_2024-02-01T14-19-45.427052.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", 
"data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": 
["**/details_harness|hendrycksTest-global_facts|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": 
"2024_02_01T14_19_45.427052", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|hendrycksTest-management|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T14-19-45.427052.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|hendrycksTest-virology|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|truthfulqa:mc|0_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["**/details_harness|winogrande|5_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": 
["**/details_harness|winogrande|5_2024-02-01T14-19-45.427052.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_02_01T14_19_45.427052", "path": ["results_2024-02-01T14-19-45.427052.parquet"]}, {"split": "latest", "path": ["results_2024-02-01T14-19-45.427052.parquet"]}]}]} | 2024-02-01T14:22:35+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of rufjdk5480/WestLake-dpo-train-sft-v1
Dataset automatically created during the evaluation run of model rufjdk5480/WestLake-dpo-train-sft-v1 on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
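```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_rufjdk5480__WestLake-dpo-train-sft-v1",
	"harness_winogrande_5",
	split="train")
```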
## Latest results
These are the latest results from run 2024-02-01T14:19:45.427052 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
c87b0d2230fe28cd8089a2f2eca8a319935298b6 |
# Dataset Card for Evaluation run of Technoculture/MT7Bi-sft
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [Technoculture/MT7Bi-sft](https://huggingface.co/Technoculture/MT7Bi-sft) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Technoculture__MT7Bi-sft",
"harness_winogrande_5",
split="train")
```
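The aggregated metrics can be loaded the same way through the "results" configuration; a minimal sketch, assuming the "latest" split convention described above:

```python
from datasets import load_dataset

# Aggregated metrics of the run; "results" and "latest" follow the
# config/split naming conventions described above.
results = load_dataset("open-llm-leaderboard/details_Technoculture__MT7Bi-sft",
	"results",
	split="latest")
```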
## Latest results
These are the [latest results from run 2024-02-01T14:25:40.116952](https://huggingface.co/datasets/open-llm-leaderboard/details_Technoculture__MT7Bi-sft/blob/main/results_2024-02-01T14-25-40.116952.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.4105658357592459,
"acc_stderr": 0.03434113134801399,
"acc_norm": 0.416672739687421,
"acc_norm_stderr": 0.03527569703844115,
"mc1": 0.2802937576499388,
"mc1_stderr": 0.015723139524608767,
"mc2": 0.4460516469060258,
"mc2_stderr": 0.01603355318388596
},
"harness|arc:challenge|25": {
"acc": 0.3796928327645051,
"acc_stderr": 0.014182119866974874,
"acc_norm": 0.4180887372013652,
"acc_norm_stderr": 0.014413988396996074
},
"harness|hellaswag|10": {
"acc": 0.4629555865365465,
"acc_stderr": 0.004976067726432563,
"acc_norm": 0.5683130850428202,
"acc_norm_stderr": 0.004942990623131126
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.27,
"acc_stderr": 0.0446196043338474,
"acc_norm": 0.27,
"acc_norm_stderr": 0.0446196043338474
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5185185185185185,
"acc_stderr": 0.043163785995113245,
"acc_norm": 0.5185185185185185,
"acc_norm_stderr": 0.043163785995113245
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.40131578947368424,
"acc_stderr": 0.039889037033362836,
"acc_norm": 0.40131578947368424,
"acc_norm_stderr": 0.039889037033362836
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.4,
"acc_stderr": 0.04923659639173309,
"acc_norm": 0.4,
"acc_norm_stderr": 0.04923659639173309
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.5018867924528302,
"acc_stderr": 0.03077265364207565,
"acc_norm": 0.5018867924528302,
"acc_norm_stderr": 0.03077265364207565
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.4236111111111111,
"acc_stderr": 0.04132125019723369,
"acc_norm": 0.4236111111111111,
"acc_norm_stderr": 0.04132125019723369
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.28,
"acc_stderr": 0.04512608598542128,
"acc_norm": 0.28,
"acc_norm_stderr": 0.04512608598542128
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.39,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.39,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.39,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.39,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.41040462427745666,
"acc_stderr": 0.03750757044895537,
"acc_norm": 0.41040462427745666,
"acc_norm_stderr": 0.03750757044895537
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.29411764705882354,
"acc_stderr": 0.04533838195929774,
"acc_norm": 0.29411764705882354,
"acc_norm_stderr": 0.04533838195929774
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.43,
"acc_stderr": 0.04975698519562428,
"acc_norm": 0.43,
"acc_norm_stderr": 0.04975698519562428
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.3276595744680851,
"acc_stderr": 0.030683020843231004,
"acc_norm": 0.3276595744680851,
"acc_norm_stderr": 0.030683020843231004
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.2982456140350877,
"acc_stderr": 0.043036840335373146,
"acc_norm": 0.2982456140350877,
"acc_norm_stderr": 0.043036840335373146
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.45517241379310347,
"acc_stderr": 0.04149886942192117,
"acc_norm": 0.45517241379310347,
"acc_norm_stderr": 0.04149886942192117
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.2751322751322751,
"acc_stderr": 0.02300008685906864,
"acc_norm": 0.2751322751322751,
"acc_norm_stderr": 0.02300008685906864
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.25396825396825395,
"acc_stderr": 0.03893259610604674,
"acc_norm": 0.25396825396825395,
"acc_norm_stderr": 0.03893259610604674
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.33,
"acc_stderr": 0.047258156262526045,
"acc_norm": 0.33,
"acc_norm_stderr": 0.047258156262526045
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.4935483870967742,
"acc_stderr": 0.02844163823354051,
"acc_norm": 0.4935483870967742,
"acc_norm_stderr": 0.02844163823354051
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.26108374384236455,
"acc_stderr": 0.0309037969521145,
"acc_norm": 0.26108374384236455,
"acc_norm_stderr": 0.0309037969521145
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.39,
"acc_stderr": 0.04902071300001974,
"acc_norm": 0.39,
"acc_norm_stderr": 0.04902071300001974
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.47878787878787876,
"acc_stderr": 0.039008289137373,
"acc_norm": 0.47878787878787876,
"acc_norm_stderr": 0.039008289137373
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.4797979797979798,
"acc_stderr": 0.03559443565563921,
"acc_norm": 0.4797979797979798,
"acc_norm_stderr": 0.03559443565563921
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.47668393782383417,
"acc_stderr": 0.03604513672442206,
"acc_norm": 0.47668393782383417,
"acc_norm_stderr": 0.03604513672442206
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.3435897435897436,
"acc_stderr": 0.024078696580635484,
"acc_norm": 0.3435897435897436,
"acc_norm_stderr": 0.024078696580635484
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.2851851851851852,
"acc_stderr": 0.027528599210340496,
"acc_norm": 0.2851851851851852,
"acc_norm_stderr": 0.027528599210340496
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.3697478991596639,
"acc_stderr": 0.031357095996135904,
"acc_norm": 0.3697478991596639,
"acc_norm_stderr": 0.031357095996135904
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.304635761589404,
"acc_stderr": 0.037579499229433426,
"acc_norm": 0.304635761589404,
"acc_norm_stderr": 0.037579499229433426
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.5522935779816514,
"acc_stderr": 0.021319754962425462,
"acc_norm": 0.5522935779816514,
"acc_norm_stderr": 0.021319754962425462
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.35648148148148145,
"acc_stderr": 0.032664783315272714,
"acc_norm": 0.35648148148148145,
"acc_norm_stderr": 0.032664783315272714
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.37745098039215685,
"acc_stderr": 0.03402272044340703,
"acc_norm": 0.37745098039215685,
"acc_norm_stderr": 0.03402272044340703
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.5780590717299579,
"acc_stderr": 0.032148146302403695,
"acc_norm": 0.5780590717299579,
"acc_norm_stderr": 0.032148146302403695
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.47533632286995514,
"acc_stderr": 0.033516951676526276,
"acc_norm": 0.47533632286995514,
"acc_norm_stderr": 0.033516951676526276
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.5343511450381679,
"acc_stderr": 0.043749285605997376,
"acc_norm": 0.5343511450381679,
"acc_norm_stderr": 0.043749285605997376
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.5867768595041323,
"acc_stderr": 0.04495087843548408,
"acc_norm": 0.5867768595041323,
"acc_norm_stderr": 0.04495087843548408
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.5370370370370371,
"acc_stderr": 0.04820403072760628,
"acc_norm": 0.5370370370370371,
"acc_norm_stderr": 0.04820403072760628
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.4294478527607362,
"acc_stderr": 0.038890666191127216,
"acc_norm": 0.4294478527607362,
"acc_norm_stderr": 0.038890666191127216
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.21428571428571427,
"acc_stderr": 0.038946411200447915,
"acc_norm": 0.21428571428571427,
"acc_norm_stderr": 0.038946411200447915
},
"harness|hendrycksTest-management|5": {
"acc": 0.5048543689320388,
"acc_stderr": 0.04950504382128921,
"acc_norm": 0.5048543689320388,
"acc_norm_stderr": 0.04950504382128921
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.6025641025641025,
"acc_stderr": 0.03205953453789293,
"acc_norm": 0.6025641025641025,
"acc_norm_stderr": 0.03205953453789293
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.42,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.42,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.4661558109833972,
"acc_stderr": 0.017838956009136805,
"acc_norm": 0.4661558109833972,
"acc_norm_stderr": 0.017838956009136805
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.44508670520231214,
"acc_stderr": 0.02675625512966377,
"acc_norm": 0.44508670520231214,
"acc_norm_stderr": 0.02675625512966377
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.2335195530726257,
"acc_stderr": 0.014149575348976259,
"acc_norm": 0.2335195530726257,
"acc_norm_stderr": 0.014149575348976259
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.4869281045751634,
"acc_stderr": 0.028620130800700246,
"acc_norm": 0.4869281045751634,
"acc_norm_stderr": 0.028620130800700246
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.4437299035369775,
"acc_stderr": 0.028217683556652308,
"acc_norm": 0.4437299035369775,
"acc_norm_stderr": 0.028217683556652308
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.42901234567901236,
"acc_stderr": 0.027538925613470863,
"acc_norm": 0.42901234567901236,
"acc_norm_stderr": 0.027538925613470863
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.33687943262411346,
"acc_stderr": 0.028195534873966734,
"acc_norm": 0.33687943262411346,
"acc_norm_stderr": 0.028195534873966734
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.3057366362451108,
"acc_stderr": 0.011766973847072914,
"acc_norm": 0.3057366362451108,
"acc_norm_stderr": 0.011766973847072914
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.4227941176470588,
"acc_stderr": 0.030008562845003486,
"acc_norm": 0.4227941176470588,
"acc_norm_stderr": 0.030008562845003486
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.39705882352941174,
"acc_stderr": 0.01979448890002411,
"acc_norm": 0.39705882352941174,
"acc_norm_stderr": 0.01979448890002411
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.43636363636363634,
"acc_stderr": 0.04750185058907297,
"acc_norm": 0.43636363636363634,
"acc_norm_stderr": 0.04750185058907297
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.49795918367346936,
"acc_stderr": 0.0320089533497105,
"acc_norm": 0.49795918367346936,
"acc_norm_stderr": 0.0320089533497105
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.5472636815920398,
"acc_stderr": 0.03519702717576915,
"acc_norm": 0.5472636815920398,
"acc_norm_stderr": 0.03519702717576915
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.57,
"acc_stderr": 0.04975698519562427,
"acc_norm": 0.57,
"acc_norm_stderr": 0.04975698519562427
},
"harness|hendrycksTest-virology|5": {
"acc": 0.3855421686746988,
"acc_stderr": 0.037891344246115496,
"acc_norm": 0.3855421686746988,
"acc_norm_stderr": 0.037891344246115496
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.49122807017543857,
"acc_stderr": 0.03834234744164993,
"acc_norm": 0.49122807017543857,
"acc_norm_stderr": 0.03834234744164993
},
"harness|truthfulqa:mc|0": {
"mc1": 0.2802937576499388,
"mc1_stderr": 0.015723139524608767,
"mc2": 0.4460516469060258,
"mc2_stderr": 0.01603355318388596
},
"harness|winogrande|5": {
"acc": 0.6045777426992897,
"acc_stderr": 0.013741678387545352
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
}
}
```
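For reference, here is a short sketch of how the per-task numbers above could be aggregated, assuming the full JSON object is saved locally as `results.json` (the filename is an assumption, not part of this repository):

```python
import json

# Load the results object shown above; "results.json" is an assumed filename.
with open("results.json") as f:
    results = json.load(f)

# Macro-average accuracy over the MMLU (hendrycksTest) subtasks.
mmlu_accs = [
    v["acc"] for k, v in results.items() if k.startswith("harness|hendrycksTest-")
]
print(f"MMLU macro-average acc over {len(mmlu_accs)} subtasks: "
      f"{sum(mmlu_accs) / len(mmlu_accs):.4f}")
```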
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed]
#region-us
|
# Dataset Card for Evaluation run of Technoculture/MT7Bi-sft
Dataset automatically created during the evaluation run of model Technoculture/MT7Bi-sft on the Open LLM Leaderboard.
The dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2024-02-01T14:25:40.116952(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval):
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Evaluation run of Technoculture/MT7Bi-sft\n\n\n\nDataset automatically created during the evaluation run of model Technoculture/MT7Bi-sft on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-02-01T14:25:40.116952(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Technoculture/MT7Bi-sft\n\n\n\nDataset automatically created during the evaluation run of model Technoculture/MT7Bi-sft on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-02-01T14:25:40.116952(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] |
3324210c6231d2a07af81a0d9a69bec38872cb48 |
- 36.528 English texts in total, 12.955 NOT offensive and 23.573 OFFENSIVE texts
- All duplicate values were removed
- Split using sklearn into 80% train and 20% temporary test (stratified by label). The temporary test set was then split 50/50 into test and validation sets (stratified by label)
- Split: 80/10/10
- Train set label distribution: 0 ==> 10.364, 1 ==> 18.858
- Validation set label distribution: 0 ==> 1.296, 1 ==> 2.357
- Test set label distribution: 0 ==> 1.295, 1 ==> 2.358
- The OLID dataset (Zampieri et al., 2019) and the labels "Offensive" and "Neither" from the paper's dataset "Automated Hate Speech Detection and the Problem of Offensive Language" (Davidson et al., 2017) | christinacdl/offensive_language_dataset | [
"task_categories:text-classification",
"language:en",
"license:apache-2.0",
"region:us"
] | 2024-02-01T14:47:32+00:00 | {"language": ["en"], "license": "apache-2.0", "task_categories": ["text-classification"]} | 2024-02-01T14:59:20+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-classification #language-English #license-apache-2.0 #region-us
|
- 36.528 English texts in total, 12.955 NOT offensive and 23.573 OFFENSIVE texts
- All duplicate values were removed
- Split using sklearn into 80% train and 20% temporary test (stratified by label). The temporary test set was then split 50/50 into test and validation sets (stratified by label)
- Split: 80/10/10
- Train set label distribution: 0 ==> 10.364, 1 ==> 18.858
- Validation set label distribution: 0 ==> 1.296, 1 ==> 2.357
- Test set label distribution: 0 ==> 1.295, 1 ==> 2.358
- The OLID dataset (Zampieri et al., 2019) and the labels "Offensive" and "Neither" from the paper's dataset "Automated Hate Speech Detection and the Problem of Offensive Language" (Davidson et al., 2017) | [] | [
"TAGS\n#task_categories-text-classification #language-English #license-apache-2.0 #region-us \n"
] |
c027163c9926e33f5528994fe8bf4556641ff92b | # Dataset Card for "fluent_speech_commands_unit"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Codec-SUPERB/fluent_speech_commands_unit | [
"region:us"
] | 2024-02-01T15:29:18+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "academicodec_hifi_16k_320d", "path": "data/academicodec_hifi_16k_320d-*"}, {"split": "academicodec_hifi_16k_320d_large_uni", "path": "data/academicodec_hifi_16k_320d_large_uni-*"}, {"split": "academicodec_hifi_24k_320d", "path": "data/academicodec_hifi_24k_320d-*"}, {"split": "audiodec_24k_320d", "path": "data/audiodec_24k_320d-*"}, {"split": "dac_16k", "path": "data/dac_16k-*"}, {"split": "dac_24k", "path": "data/dac_24k-*"}, {"split": "dac_44k", "path": "data/dac_44k-*"}, {"split": "encodec_24k_12bps", "path": "data/encodec_24k_12bps-*"}, {"split": "encodec_24k_1_5bps", "path": "data/encodec_24k_1_5bps-*"}, {"split": "encodec_24k_24bps", "path": "data/encodec_24k_24bps-*"}, {"split": "encodec_24k_3bps", "path": "data/encodec_24k_3bps-*"}, {"split": "encodec_24k_6bps", "path": "data/encodec_24k_6bps-*"}, {"split": "funcodec_en_libritts_16k_gr1nq32ds320", "path": "data/funcodec_en_libritts_16k_gr1nq32ds320-*"}, {"split": "funcodec_en_libritts_16k_gr8nq32ds320", "path": "data/funcodec_en_libritts_16k_gr8nq32ds320-*"}, {"split": "funcodec_en_libritts_16k_nq32ds320", "path": "data/funcodec_en_libritts_16k_nq32ds320-*"}, {"split": "funcodec_en_libritts_16k_nq32ds640", "path": "data/funcodec_en_libritts_16k_nq32ds640-*"}, {"split": "funcodec_zh_en_16k_nq32ds320", "path": "data/funcodec_zh_en_16k_nq32ds320-*"}, {"split": "funcodec_zh_en_16k_nq32ds640", "path": "data/funcodec_zh_en_16k_nq32ds640-*"}, {"split": "speech_tokenizer_16k", "path": "data/speech_tokenizer_16k-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "unit", "sequence": {"sequence": "int64"}}], "splits": [{"name": "academicodec_hifi_16k_320d", "num_bytes": 113230526, "num_examples": 30043}, {"name": "academicodec_hifi_16k_320d_large_uni", "num_bytes": 113230526, "num_examples": 30043}, {"name": "academicodec_hifi_24k_320d", "num_bytes": 168731838, "num_examples": 30043}, {"name": "audiodec_24k_320d", "num_bytes": 358989102, "num_examples": 30043}, {"name": "dac_16k", "num_bytes": 339147198, "num_examples": 30043}, {"name": "dac_24k", "num_bytes": 1346105166, "num_examples": 30043}, {"name": "dac_44k", "num_bytes": 436126386, "num_examples": 30043}, {"name": "encodec_24k_12bps", "num_bytes": 671261198, "num_examples": 30043}, {"name": "encodec_24k_1_5bps", "num_bytes": 85988566, "num_examples": 30043}, {"name": "encodec_24k_24bps", "num_bytes": 1340144206, "num_examples": 30043}, {"name": "encodec_24k_3bps", "num_bytes": 169598942, "num_examples": 30043}, {"name": "encodec_24k_6bps", "num_bytes": 336819694, "num_examples": 30043}, {"name": "funcodec_en_libritts_16k_gr1nq32ds320", "num_bytes": 896887886, "num_examples": 30043}, {"name": "funcodec_en_libritts_16k_gr8nq32ds320", "num_bytes": 896887886, "num_examples": 30043}, {"name": "funcodec_en_libritts_16k_nq32ds320", "num_bytes": 896513102, "num_examples": 30043}, {"name": "funcodec_en_libritts_16k_nq32ds640", "num_bytes": 453298510, "num_examples": 30043}, {"name": "funcodec_zh_en_16k_nq32ds320", "num_bytes": 896513102, "num_examples": 30043}, {"name": "funcodec_zh_en_16k_nq32ds640", "num_bytes": 453298510, "num_examples": 30043}, {"name": "speech_tokenizer_16k", "num_bytes": 225911918, "num_examples": 30043}], "download_size": 1566492443, "dataset_size": 10198684262}} | 2024-02-01T15:32:35+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "fluent_speech_commands_unit"
More Information needed | [
"# Dataset Card for \"fluent_speech_commands_unit\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"fluent_speech_commands_unit\"\n\nMore Information needed"
] |
58a8041cccc2cfb88a787dd3f0cd42ea9c21ec83 |
For the shared task [CLEF TextDetox 2024](https://pan.webis.de/clef24/pan24-web/text-detoxification.html), we provide a compilation of binary toxicity classification datasets for each language.
Namely, for each language, we provide a 5k-sample subset of the datasets -- 2.5k toxic and 2.5k non-toxic samples.
The list of original sources:
* English: [Jigsaw](https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge), [Unitary AI Toxicity Dataset](https://github.com/unitaryai/detoxify)
* Russian: [Russian Language Toxic Comments](https://www.kaggle.com/datasets/blackmoon/russian-language-toxic-comments), [Toxic Russian Comments](https://www.kaggle.com/datasets/alexandersemiletov/toxic-russian-comments)
* Ukrainian: our labeling based on [Ukrainian Twitter texts](https://github.com/saganoren/ukr-twi-corpus)
* Spanish: [CLANDESTINO, the Spanish toxic language dataset](https://github.com/microsoft/Clandestino/tree/main)
* German: [DeTox-Dataset](https://github.com/hdaSprachtechnologie/detox), [GemEval 2018, 2021](https://aclanthology.org/2021.germeval-1.1/)
* Amharic: [Amharic Hate Speech](https://github.com/uhh-lt/AmharicHateSpeech)
* Arabic: [OSACT4](https://edinburghnlp.inf.ed.ac.uk/workshops/OSACT4/)
* Hindi: [Hostility Detection Dataset in Hindi](https://competitions.codalab.org/competitions/26654#learn_the_details-dataset), [Overview of the HASOC track at FIRE 2019: Hate Speech and Offensive Content Identification in Indo-European Languages](https://dl.acm.org/doi/pdf/10.1145/3368567.3368584?download=true)
All credits go to the authors of the original datasets.
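A single language split can be loaded directly (a minimal sketch based on the split names and the `text`/`toxic` features declared in this repo's metadata):

```python
from datasets import load_dataset

# Each language is exposed as its own split: en, ru, uk, de, es, am, zh, ar, hi.
ds = load_dataset("textdetox/multilingual_toxicity_dataset", split="en")
print(ds[0])  # {'text': '...', 'toxic': 0 or 1}
```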
| textdetox/multilingual_toxicity_dataset | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:en",
"language:ru",
"language:uk",
"language:de",
"language:es",
"language:am",
"language:zh",
"language:ar",
"language:hi",
"license:openrail++",
"region:us"
] | 2024-02-01T15:44:46+00:00 | {"language": ["en", "ru", "uk", "de", "es", "am", "zh", "ar", "hi"], "license": "openrail++", "size_categories": ["10K<n<100K"], "task_categories": ["text-classification"], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "toxic", "dtype": "int64"}], "splits": [{"name": "en", "num_bytes": 411178, "num_examples": 5000}, {"name": "ru", "num_bytes": 710001, "num_examples": 5000}, {"name": "uk", "num_bytes": 630595, "num_examples": 5000}, {"name": "de", "num_bytes": 941017, "num_examples": 5000}, {"name": "es", "num_bytes": 978750, "num_examples": 5000}, {"name": "am", "num_bytes": 1102628, "num_examples": 5000}, {"name": "zh", "num_bytes": 359235, "num_examples": 5000}, {"name": "ar", "num_bytes": 889661, "num_examples": 5000}, {"name": "hi", "num_bytes": 1842662, "num_examples": 5000}], "download_size": 4470012, "dataset_size": 7865727}, "configs": [{"config_name": "default", "data_files": [{"split": "en", "path": "data/en-*"}, {"split": "ru", "path": "data/ru-*"}, {"split": "uk", "path": "data/uk-*"}, {"split": "de", "path": "data/de-*"}, {"split": "es", "path": "data/es-*"}, {"split": "am", "path": "data/am-*"}, {"split": "zh", "path": "data/zh-*"}, {"split": "ar", "path": "data/ar-*"}, {"split": "hi", "path": "data/hi-*"}]}]} | 2024-02-14T09:30:13+00:00 | [] | [
"en",
"ru",
"uk",
"de",
"es",
"am",
"zh",
"ar",
"hi"
] | TAGS
#task_categories-text-classification #size_categories-10K<n<100K #language-English #language-Russian #language-Ukrainian #language-German #language-Spanish #language-Amharic #language-Chinese #language-Arabic #language-Hindi #license-openrail++ #region-us
|
For the shared task CLEF TextDetox 2024, we provide a compilation of binary toxicity classification datasets for each language.
Namely, for each language, we provide a 5k-sample subset of the datasets -- 2.5k toxic and 2.5k non-toxic samples.
The list of original sources:
* English: Jigsaw, Unitary AI Toxicity Dataset
* Russian: Russian Language Toxic Comments, Toxic Russian Comments
* Ukrainian: our labeling based on Ukrainian Twitter texts
* Spanish: CLANDESTINO, the Spanish toxic language dataset
* German: DeTox-Dataset, GemEval 2018, 2021
* Amharic: Amharic Hate Speech
* Arabic: OSACT4
* Hindi: Hostility Detection Dataset in Hindi, Overview of the HASOC track at FIRE 2019: Hate Speech and Offensive Content Identification in Indo-European Languages
All credits go to the authors of the original datasets.
| [] | [
"TAGS\n#task_categories-text-classification #size_categories-10K<n<100K #language-English #language-Russian #language-Ukrainian #language-German #language-Spanish #language-Amharic #language-Chinese #language-Arabic #language-Hindi #license-openrail++ #region-us \n"
] |
bfcb99ff514be4dcf6d2de01f27ef49b3e4561aa | # Dataset Card for "lmind_nq_train600_eval300_v1_qa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/lmind_nq_train600_eval300_v1_qa | [
"region:us"
] | 2024-02-01T15:49:06+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train_qa", "path": "data/train_qa-*"}, {"split": "train_recite_qa", "path": "data/train_recite_qa-*"}, {"split": "eval_qa", "path": "data/eval_qa-*"}, {"split": "eval_recite_qa", "path": "data/eval_recite_qa-*"}, {"split": "all_docs", "path": "data/all_docs-*"}, {"split": "all_docs_eval", "path": "data/all_docs_eval-*"}, {"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "answers", "struct": [{"name": "answer_start", "sequence": "null"}, {"name": "text", "sequence": "string"}]}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train_qa", "num_bytes": 68720, "num_examples": 600}, {"name": "train_recite_qa", "num_bytes": 453011, "num_examples": 600}, {"name": "eval_qa", "num_bytes": 35277, "num_examples": 300}, {"name": "eval_recite_qa", "num_bytes": 226920, "num_examples": 300}, {"name": "all_docs", "num_bytes": 574063, "num_examples": 883}, {"name": "all_docs_eval", "num_bytes": 573998, "num_examples": 883}, {"name": "train", "num_bytes": 68720, "num_examples": 600}, {"name": "validation", "num_bytes": 35277, "num_examples": 300}], "download_size": 1292475, "dataset_size": 2035986}} | 2024-02-01T15:49:27+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "lmind_nq_train600_eval300_v1_qa"
More Information needed | [
"# Dataset Card for \"lmind_nq_train600_eval300_v1_qa\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"lmind_nq_train600_eval300_v1_qa\"\n\nMore Information needed"
] |
2b56502bbf7993b828cb520011d3c918f420bfaa | # Dataset Card for "lmind_nq_train600_eval300_v1_doc"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/lmind_nq_train600_eval300_v1_doc | [
"region:us"
] | 2024-02-01T15:49:28+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train_qa", "path": "data/train_qa-*"}, {"split": "train_recite_qa", "path": "data/train_recite_qa-*"}, {"split": "eval_qa", "path": "data/eval_qa-*"}, {"split": "eval_recite_qa", "path": "data/eval_recite_qa-*"}, {"split": "all_docs", "path": "data/all_docs-*"}, {"split": "all_docs_eval", "path": "data/all_docs_eval-*"}, {"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "answers", "struct": [{"name": "answer_start", "sequence": "null"}, {"name": "text", "sequence": "string"}]}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train_qa", "num_bytes": 68720, "num_examples": 600}, {"name": "train_recite_qa", "num_bytes": 453011, "num_examples": 600}, {"name": "eval_qa", "num_bytes": 35277, "num_examples": 300}, {"name": "eval_recite_qa", "num_bytes": 226920, "num_examples": 300}, {"name": "all_docs", "num_bytes": 574063, "num_examples": 883}, {"name": "all_docs_eval", "num_bytes": 573998, "num_examples": 883}, {"name": "train", "num_bytes": 574063, "num_examples": 883}, {"name": "validation", "num_bytes": 574063, "num_examples": 883}], "download_size": 1948274, "dataset_size": 3080115}} | 2024-02-01T15:49:48+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "lmind_nq_train600_eval300_v1_doc"
More Information needed | [
"# Dataset Card for \"lmind_nq_train600_eval300_v1_doc\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"lmind_nq_train600_eval300_v1_doc\"\n\nMore Information needed"
] |
ebce28ea3b29647a6a2230902817a43c57cf3770 | # Dataset Card for "lmind_hotpot_train500_eval300_v1_qa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/lmind_hotpot_train500_eval300_v1_qa | [
"region:us"
] | 2024-02-01T15:49:43+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train_qa", "path": "data/train_qa-*"}, {"split": "train_recite_qa", "path": "data/train_recite_qa-*"}, {"split": "eval_qa", "path": "data/eval_qa-*"}, {"split": "eval_recite_qa", "path": "data/eval_recite_qa-*"}, {"split": "all_docs", "path": "data/all_docs-*"}, {"split": "all_docs_eval", "path": "data/all_docs_eval-*"}, {"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "answers", "struct": [{"name": "answer_start", "sequence": "null"}, {"name": "text", "sequence": "string"}]}], "splits": [{"name": "train_qa", "num_bytes": 84812, "num_examples": 500}, {"name": "train_recite_qa", "num_bytes": 525773, "num_examples": 500}, {"name": "eval_qa", "num_bytes": 49916, "num_examples": 300}, {"name": "eval_recite_qa", "num_bytes": 324839, "num_examples": 300}, {"name": "all_docs", "num_bytes": 738612, "num_examples": 1594}, {"name": "all_docs_eval", "num_bytes": 738503, "num_examples": 1594}, {"name": "train", "num_bytes": 84812, "num_examples": 500}, {"name": "validation", "num_bytes": 49916, "num_examples": 300}], "download_size": 1623187, "dataset_size": 2597183}} | 2024-02-01T15:50:06+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "lmind_hotpot_train500_eval300_v1_qa"
More Information needed | [
"# Dataset Card for \"lmind_hotpot_train500_eval300_v1_qa\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"lmind_hotpot_train500_eval300_v1_qa\"\n\nMore Information needed"
] |
13a194161f707e11f590252a35025e04f9e1a8aa | # Dataset Card for "lmind_nq_train600_eval300_v1_docidx"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/lmind_nq_train600_eval300_v1_docidx | [
"region:us"
] | 2024-02-01T15:49:49+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train_qa", "path": "data/train_qa-*"}, {"split": "train_recite_qa", "path": "data/train_recite_qa-*"}, {"split": "eval_qa", "path": "data/eval_qa-*"}, {"split": "eval_recite_qa", "path": "data/eval_recite_qa-*"}, {"split": "all_docs", "path": "data/all_docs-*"}, {"split": "all_docs_eval", "path": "data/all_docs_eval-*"}, {"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "answers", "struct": [{"name": "answer_start", "sequence": "null"}, {"name": "text", "sequence": "string"}]}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train_qa", "num_bytes": 68720, "num_examples": 600}, {"name": "train_recite_qa", "num_bytes": 453011, "num_examples": 600}, {"name": "eval_qa", "num_bytes": 35277, "num_examples": 300}, {"name": "eval_recite_qa", "num_bytes": 226920, "num_examples": 300}, {"name": "all_docs", "num_bytes": 574063, "num_examples": 883}, {"name": "all_docs_eval", "num_bytes": 573998, "num_examples": 883}, {"name": "train", "num_bytes": 574063, "num_examples": 883}, {"name": "validation", "num_bytes": 573998, "num_examples": 883}], "download_size": 1954964, "dataset_size": 3080050}} | 2024-02-01T15:50:09+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "lmind_nq_train600_eval300_v1_docidx"
More Information needed | [
"# Dataset Card for \"lmind_nq_train600_eval300_v1_docidx\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"lmind_nq_train600_eval300_v1_docidx\"\n\nMore Information needed"
] |
91d86840cd16bb7b8099170ddc8122373f088f97 | # Dataset Card for "lmind_hotpot_train500_eval300_v1_doc"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/lmind_hotpot_train500_eval300_v1_doc | [
"region:us"
] | 2024-02-01T15:50:06+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train_qa", "path": "data/train_qa-*"}, {"split": "train_recite_qa", "path": "data/train_recite_qa-*"}, {"split": "eval_qa", "path": "data/eval_qa-*"}, {"split": "eval_recite_qa", "path": "data/eval_recite_qa-*"}, {"split": "all_docs", "path": "data/all_docs-*"}, {"split": "all_docs_eval", "path": "data/all_docs_eval-*"}, {"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "answers", "struct": [{"name": "answer_start", "sequence": "null"}, {"name": "text", "sequence": "string"}]}], "splits": [{"name": "train_qa", "num_bytes": 84812, "num_examples": 500}, {"name": "train_recite_qa", "num_bytes": 525773, "num_examples": 500}, {"name": "eval_qa", "num_bytes": 49916, "num_examples": 300}, {"name": "eval_recite_qa", "num_bytes": 324839, "num_examples": 300}, {"name": "all_docs", "num_bytes": 738612, "num_examples": 1594}, {"name": "all_docs_eval", "num_bytes": 738503, "num_examples": 1594}, {"name": "train", "num_bytes": 738612, "num_examples": 1594}, {"name": "validation", "num_bytes": 738612, "num_examples": 1594}], "download_size": 2429329, "dataset_size": 3939679}} | 2024-02-01T15:50:26+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "lmind_hotpot_train500_eval300_v1_doc"
More Information needed | [
"# Dataset Card for \"lmind_hotpot_train500_eval300_v1_doc\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"lmind_hotpot_train500_eval300_v1_doc\"\n\nMore Information needed"
] |
fe3e5c85909e68de129b7d6ee020b3a10c6a8292 | # Dataset Card for "lmind_nq_train600_eval300_v1_doc_qa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/lmind_nq_train600_eval300_v1_doc_qa | [
"region:us"
] | 2024-02-01T15:50:09+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train_qa", "path": "data/train_qa-*"}, {"split": "train_recite_qa", "path": "data/train_recite_qa-*"}, {"split": "eval_qa", "path": "data/eval_qa-*"}, {"split": "eval_recite_qa", "path": "data/eval_recite_qa-*"}, {"split": "all_docs", "path": "data/all_docs-*"}, {"split": "all_docs_eval", "path": "data/all_docs_eval-*"}, {"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "answers", "struct": [{"name": "answer_start", "sequence": "null"}, {"name": "text", "sequence": "string"}]}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train_qa", "num_bytes": 68720, "num_examples": 600}, {"name": "train_recite_qa", "num_bytes": 453011, "num_examples": 600}, {"name": "eval_qa", "num_bytes": 35277, "num_examples": 300}, {"name": "eval_recite_qa", "num_bytes": 226920, "num_examples": 300}, {"name": "all_docs", "num_bytes": 574063, "num_examples": 883}, {"name": "all_docs_eval", "num_bytes": 573998, "num_examples": 883}, {"name": "train", "num_bytes": 642783, "num_examples": 1483}, {"name": "validation", "num_bytes": 35277, "num_examples": 300}], "download_size": 1652266, "dataset_size": 2610049}} | 2024-02-01T15:50:30+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "lmind_nq_train600_eval300_v1_doc_qa"
More Information needed | [
"# Dataset Card for \"lmind_nq_train600_eval300_v1_doc_qa\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"lmind_nq_train600_eval300_v1_doc_qa\"\n\nMore Information needed"
] |
1ed6b4175864b91e43af6c95cddd38baff90890d | # Dataset Card for "lmind_hotpot_train500_eval300_v1_docidx"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/lmind_hotpot_train500_eval300_v1_docidx | [
"region:us"
] | 2024-02-01T15:50:26+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train_qa", "path": "data/train_qa-*"}, {"split": "train_recite_qa", "path": "data/train_recite_qa-*"}, {"split": "eval_qa", "path": "data/eval_qa-*"}, {"split": "eval_recite_qa", "path": "data/eval_recite_qa-*"}, {"split": "all_docs", "path": "data/all_docs-*"}, {"split": "all_docs_eval", "path": "data/all_docs_eval-*"}, {"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "answers", "struct": [{"name": "answer_start", "sequence": "null"}, {"name": "text", "sequence": "string"}]}], "splits": [{"name": "train_qa", "num_bytes": 84812, "num_examples": 500}, {"name": "train_recite_qa", "num_bytes": 525773, "num_examples": 500}, {"name": "eval_qa", "num_bytes": 49916, "num_examples": 300}, {"name": "eval_recite_qa", "num_bytes": 324839, "num_examples": 300}, {"name": "all_docs", "num_bytes": 738612, "num_examples": 1594}, {"name": "all_docs_eval", "num_bytes": 738503, "num_examples": 1594}, {"name": "train", "num_bytes": 738612, "num_examples": 1594}, {"name": "validation", "num_bytes": 738503, "num_examples": 1594}], "download_size": 2440790, "dataset_size": 3939570}} | 2024-02-01T15:50:45+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "lmind_hotpot_train500_eval300_v1_docidx"
More Information needed | [
"# Dataset Card for \"lmind_hotpot_train500_eval300_v1_docidx\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"lmind_hotpot_train500_eval300_v1_docidx\"\n\nMore Information needed"
] |
4c5d2060d10ea6beaca503721297152ba7f7306f | # Dataset Card for "lmind_nq_train600_eval300_v1_recite_qa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/lmind_nq_train600_eval300_v1_recite_qa | [
"region:us"
] | 2024-02-01T15:50:30+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train_qa", "path": "data/train_qa-*"}, {"split": "train_recite_qa", "path": "data/train_recite_qa-*"}, {"split": "eval_qa", "path": "data/eval_qa-*"}, {"split": "eval_recite_qa", "path": "data/eval_recite_qa-*"}, {"split": "all_docs", "path": "data/all_docs-*"}, {"split": "all_docs_eval", "path": "data/all_docs_eval-*"}, {"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "answers", "struct": [{"name": "answer_start", "sequence": "null"}, {"name": "text", "sequence": "string"}]}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train_qa", "num_bytes": 68720, "num_examples": 600}, {"name": "train_recite_qa", "num_bytes": 453011, "num_examples": 600}, {"name": "eval_qa", "num_bytes": 35277, "num_examples": 300}, {"name": "eval_recite_qa", "num_bytes": 226920, "num_examples": 300}, {"name": "all_docs", "num_bytes": 574063, "num_examples": 883}, {"name": "all_docs_eval", "num_bytes": 573998, "num_examples": 883}, {"name": "train", "num_bytes": 1027074, "num_examples": 1483}, {"name": "validation", "num_bytes": 226920, "num_examples": 300}], "download_size": 2010342, "dataset_size": 3185983}} | 2024-02-01T15:50:51+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "lmind_nq_train600_eval300_v1_recite_qa"
More Information needed | [
"# Dataset Card for \"lmind_nq_train600_eval300_v1_recite_qa\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"lmind_nq_train600_eval300_v1_recite_qa\"\n\nMore Information needed"
] |
64cd09f19820764da4f58c13962099914b9e50e8 | # Dataset Card for "lmind_hotpot_train500_eval300_v1_doc_qa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/lmind_hotpot_train500_eval300_v1_doc_qa | [
"region:us"
] | 2024-02-01T15:50:46+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train_qa", "path": "data/train_qa-*"}, {"split": "train_recite_qa", "path": "data/train_recite_qa-*"}, {"split": "eval_qa", "path": "data/eval_qa-*"}, {"split": "eval_recite_qa", "path": "data/eval_recite_qa-*"}, {"split": "all_docs", "path": "data/all_docs-*"}, {"split": "all_docs_eval", "path": "data/all_docs_eval-*"}, {"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "answers", "struct": [{"name": "answer_start", "sequence": "null"}, {"name": "text", "sequence": "string"}]}], "splits": [{"name": "train_qa", "num_bytes": 84812, "num_examples": 500}, {"name": "train_recite_qa", "num_bytes": 525773, "num_examples": 500}, {"name": "eval_qa", "num_bytes": 49916, "num_examples": 300}, {"name": "eval_recite_qa", "num_bytes": 324839, "num_examples": 300}, {"name": "all_docs", "num_bytes": 738612, "num_examples": 1594}, {"name": "all_docs_eval", "num_bytes": 738503, "num_examples": 1594}, {"name": "train", "num_bytes": 823424, "num_examples": 2094}, {"name": "validation", "num_bytes": 49916, "num_examples": 300}], "download_size": 2070233, "dataset_size": 3335795}} | 2024-02-01T15:51:05+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "lmind_hotpot_train500_eval300_v1_doc_qa"
More Information needed | [
"# Dataset Card for \"lmind_hotpot_train500_eval300_v1_doc_qa\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"lmind_hotpot_train500_eval300_v1_doc_qa\"\n\nMore Information needed"
] |
5cdd939ad3d6316112c6fd7c215910027e33dbe5 | # Dataset Card for "lmind_nq_train600_eval300_v1_reciteonly_qa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/lmind_nq_train600_eval300_v1_reciteonly_qa | [
"region:us"
] | 2024-02-01T15:50:51+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train_qa", "path": "data/train_qa-*"}, {"split": "train_recite_qa", "path": "data/train_recite_qa-*"}, {"split": "eval_qa", "path": "data/eval_qa-*"}, {"split": "eval_recite_qa", "path": "data/eval_recite_qa-*"}, {"split": "all_docs", "path": "data/all_docs-*"}, {"split": "all_docs_eval", "path": "data/all_docs_eval-*"}, {"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "answers", "struct": [{"name": "answer_start", "sequence": "null"}, {"name": "text", "sequence": "string"}]}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train_qa", "num_bytes": 68720, "num_examples": 600}, {"name": "train_recite_qa", "num_bytes": 453011, "num_examples": 600}, {"name": "eval_qa", "num_bytes": 35277, "num_examples": 300}, {"name": "eval_recite_qa", "num_bytes": 226920, "num_examples": 300}, {"name": "all_docs", "num_bytes": 574063, "num_examples": 883}, {"name": "all_docs_eval", "num_bytes": 573998, "num_examples": 883}, {"name": "train", "num_bytes": 453011, "num_examples": 600}, {"name": "validation", "num_bytes": 226920, "num_examples": 300}], "download_size": 1649745, "dataset_size": 2611920}} | 2024-02-01T15:51:10+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "lmind_nq_train600_eval300_v1_reciteonly_qa"
More Information needed | [
"# Dataset Card for \"lmind_nq_train600_eval300_v1_reciteonly_qa\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"lmind_nq_train600_eval300_v1_reciteonly_qa\"\n\nMore Information needed"
] |
8998130a24e308d4bd125489bbdb9be870028127 | # Dataset Card for "lmind_hotpot_train500_eval300_v1_recite_qa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/lmind_hotpot_train500_eval300_v1_recite_qa | [
"region:us"
] | 2024-02-01T15:51:06+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train_qa", "path": "data/train_qa-*"}, {"split": "train_recite_qa", "path": "data/train_recite_qa-*"}, {"split": "eval_qa", "path": "data/eval_qa-*"}, {"split": "eval_recite_qa", "path": "data/eval_recite_qa-*"}, {"split": "all_docs", "path": "data/all_docs-*"}, {"split": "all_docs_eval", "path": "data/all_docs_eval-*"}, {"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "answers", "struct": [{"name": "answer_start", "sequence": "null"}, {"name": "text", "sequence": "string"}]}], "splits": [{"name": "train_qa", "num_bytes": 84812, "num_examples": 500}, {"name": "train_recite_qa", "num_bytes": 525773, "num_examples": 500}, {"name": "eval_qa", "num_bytes": 49916, "num_examples": 300}, {"name": "eval_recite_qa", "num_bytes": 324839, "num_examples": 300}, {"name": "all_docs", "num_bytes": 738612, "num_examples": 1594}, {"name": "all_docs_eval", "num_bytes": 738503, "num_examples": 1594}, {"name": "train", "num_bytes": 1264385, "num_examples": 2094}, {"name": "validation", "num_bytes": 324839, "num_examples": 300}], "download_size": 2507316, "dataset_size": 4051679}} | 2024-02-01T15:51:25+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "lmind_hotpot_train500_eval300_v1_recite_qa"
More Information needed | [
"# Dataset Card for \"lmind_hotpot_train500_eval300_v1_recite_qa\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"lmind_hotpot_train500_eval300_v1_recite_qa\"\n\nMore Information needed"
] |
9e05906af7e3788579b9f5c29e1625a24f396b9e | # Dataset Card for "lmind_hotpot_train500_eval300_v1_reciteonly_qa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/lmind_hotpot_train500_eval300_v1_reciteonly_qa | [
"region:us"
] | 2024-02-01T15:51:26+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train_qa", "path": "data/train_qa-*"}, {"split": "train_recite_qa", "path": "data/train_recite_qa-*"}, {"split": "eval_qa", "path": "data/eval_qa-*"}, {"split": "eval_recite_qa", "path": "data/eval_recite_qa-*"}, {"split": "all_docs", "path": "data/all_docs-*"}, {"split": "all_docs_eval", "path": "data/all_docs_eval-*"}, {"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "answers", "struct": [{"name": "answer_start", "sequence": "null"}, {"name": "text", "sequence": "string"}]}], "splits": [{"name": "train_qa", "num_bytes": 84812, "num_examples": 500}, {"name": "train_recite_qa", "num_bytes": 525773, "num_examples": 500}, {"name": "eval_qa", "num_bytes": 49916, "num_examples": 300}, {"name": "eval_recite_qa", "num_bytes": 324839, "num_examples": 300}, {"name": "all_docs", "num_bytes": 738612, "num_examples": 1594}, {"name": "all_docs_eval", "num_bytes": 738503, "num_examples": 1594}, {"name": "train", "num_bytes": 525773, "num_examples": 500}, {"name": "validation", "num_bytes": 324839, "num_examples": 300}], "download_size": 2063107, "dataset_size": 3313067}} | 2024-02-01T15:51:45+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "lmind_hotpot_train500_eval300_v1_reciteonly_qa"
More Information needed | [
"# Dataset Card for \"lmind_hotpot_train500_eval300_v1_reciteonly_qa\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"lmind_hotpot_train500_eval300_v1_reciteonly_qa\"\n\nMore Information needed"
] |
9db691887ed681e46af417b6a7a1a657495885be | # Dataset Card for "VNTL-v2.5-1.6k-dpo-pairs"
This is a very experimental DPO dataset for VNTL, I have no idea if DPO will work well to improve translation, but I guess it's worth a shot!
This dataset was generated using the model [vntl-7b-v0.3.1](https://huggingface.co/lmg-anon/vntl-7b-v0.3.1-hf) using prompts from the dataset [VNTL-v2.5-1k](https://huggingface.co/datasets/lmg-anon/VNTL-v2.5-1k). All rejected sequences were generated using temperature **0.7**, and they were chosen using a cosine similarity threshold.
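As a rough illustration of that selection step, the pair filtering might look like the sketch below (the embedding model and the threshold value are assumptions; the card does not specify which were actually used):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

def keep_as_rejected(chosen: str, candidate: str, threshold: float = 0.9) -> bool:
    # Keep a sampled translation as "rejected" only if it drifts far enough
    # from the reference ("chosen") translation.
    emb = model.encode([chosen, candidate], convert_to_tensor=True)
    return util.cos_sim(emb[0], emb[1]).item() < threshold
```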
Things to consider afterwards:
- **Distillation**: This dataset wasn't filtered in any way, so there may be pairs that are actually ties or where the chosen sequence is bad.
- https://huggingface.co/datasets/argilla/distilabel-intel-orca-dpo-pairs
- **Avoid human data**: According to the paper, DPO performs better with sequences sampled directly from the model. Therefore, the dataset could be enhanced by trying to extract the chosen sequences from the model itself.
- https://arxiv.org/html/2305.18290v2#S4.p5.15.1
- **CPO**: CPO may be a better fit than DPO, it is supposedly more forgiving for accuracy, which is better for translation tasks since the translation being correct is better than it being 100% accurate to the chosen sequence.
- https://github.com/fe1ixxu/ALMA | lmg-anon/VNTL-v2.5-1.6k-dpo-pairs | [
"task_categories:translation",
"language:en",
"language:ja",
"dpo",
"region:us"
] | 2024-02-01T15:56:36+00:00 | {"language": ["en", "ja"], "task_categories": ["translation"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "chosen", "dtype": "string"}, {"name": "rejected", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 23750414, "num_examples": 8988}], "download_size": 7587165, "dataset_size": 23750414}, "tags": ["dpo"]} | 2024-02-02T05:48:20+00:00 | [] | [
"en",
"ja"
] | TAGS
#task_categories-translation #language-English #language-Japanese #dpo #region-us
| # Dataset Card for "VNTL-v2.5-1.6k-dpo-pairs"
This is a very experimental DPO dataset for VNTL, I have no idea if DPO will work well to improve translation, but I guess it's worth a shot!
This dataset was generated using the model vntl-7b-v0.3.1 using prompts from the dataset VNTL-v2.5-1k. All rejected sequences were generated using temperature 0.7, and they were chosen using a cosine similarity threshold.
Things to consider afterwards:
- Distillation: This dataset wasn't filtered in any way, so there may be pairs that are actually ties or where the chosen sequence is bad.
- URL
- Avoid human data: According to the paper, DPO performs better with sequences sampled directly from the model. Therefore, the dataset could be enhanced by trying to extract the chosen sequences from the model itself.
- URL
- CPO: CPO may be a better fit than DPO, it is supposedly more forgiving for accuracy, which is better for translation tasks since the translation being correct is better than it being 100% accurate to the chosen sequence.
- URL | [
"# Dataset Card for \"VNTL-v2.5-1.6k-dpo-pairs\"\n\nThis is a very experimental DPO dataset for VNTL, I have no idea if DPO will work well to improve translation, but I guess it's worth a shot!\n\nThis dataset was generated using the model vntl-7b-v0.3.1 using prompts from the dataset VNTL-v2.5-1k. All rejected sequences were generated using temperature 0.7, and they were chosen using a cosine similarity threshold.\n\nThings to consider afterwards:\n- Distilation: This dataset wasn't filtered in anyway, so there may be pairs that are actually ties or where the chosen sequence is bad.\n - URL\n- Avoid human data: According to the paper, DPO performs better with sequences sampled directly from the model. Therefore, the dataset could be enhanced by trying to extract the chosen sequences from the model itself.\n - URL\n- CPO: CPO may be a better fit than DPO, it is supposedly more forgiving for accuracy, which is better for translation tasks since the translation being correct is better than it being 100% accurate to the chosen sequence.\n - URL"
] | [
"TAGS\n#task_categories-translation #language-English #language-Japanese #dpo #region-us \n",
"# Dataset Card for \"VNTL-v2.5-1.6k-dpo-pairs\"\n\nThis is a very experimental DPO dataset for VNTL, I have no idea if DPO will work well to improve translation, but I guess it's worth a shot!\n\nThis dataset was generated using the model vntl-7b-v0.3.1 using prompts from the dataset VNTL-v2.5-1k. All rejected sequences were generated using temperature 0.7, and they were chosen using a cosine similarity threshold.\n\nThings to consider afterwards:\n- Distilation: This dataset wasn't filtered in anyway, so there may be pairs that are actually ties or where the chosen sequence is bad.\n - URL\n- Avoid human data: According to the paper, DPO performs better with sequences sampled directly from the model. Therefore, the dataset could be enhanced by trying to extract the chosen sequences from the model itself.\n - URL\n- CPO: CPO may be a better fit than DPO, it is supposedly more forgiving for accuracy, which is better for translation tasks since the translation being correct is better than it being 100% accurate to the chosen sequence.\n - URL"
] |
91b4f3ba4cb6675bdc8d482319c7f299b4706467 |
# Napoleon Bonaparte
The Napoleon Bonaparte dataset is a collection of information and data related to the life and reign of Napoleon Bonaparte. It includes details on his military campaigns, battles, and conquests, as well as information on his political career as Emperor of France. The dataset also contains information on the social and economic reforms he implemented in France, such as the establishment of the Napoleonic Code. The data is gathered from various sources, including historical records, biographies, and academic research. | MH0386/napoleon_bonaparte | [
"task_categories:feature-extraction",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"region:us"
] | 2024-02-01T15:57:29+00:00 | {"language": ["en"], "size_categories": ["10K<n<100K"], "task_categories": ["feature-extraction", "text-generation"], "pretty_name": "Napoleon Bonaparte"} | 2024-02-15T15:44:36+00:00 | [] | [
"en"
] | TAGS
#task_categories-feature-extraction #task_categories-text-generation #size_categories-10K<n<100K #language-English #region-us
|
# Napoleon Bonaparte
The Napoleon Bonaparte dataset is a collection of information and data related to the life and reign of Napoleon Bonaparte. It includes details on his military campaigns, battles, and conquests, as well as information on his political career as Emperor of France. The dataset also contains information on the social and economic reforms he implemented in France, such as the establishment of the Napoleonic Code. The data is gathered from various sources, including historical records, biographies, and academic research. | [
"# Napoleon Bonaparte\nThe Napoleon Bonaparte dataset is a collection of information and data related to the life and reign of Napoleon Bonaparte. It includes details on his military campaigns, battles, and conquests, as well as information on his political career as Emperor of France. The dataset also contains information on the social and economic reforms he implemented in France, such as the establishment of the Napoleonic Code. The data is gathered from various sources, including historical records, biographies, and academic research."
] | [
"TAGS\n#task_categories-feature-extraction #task_categories-text-generation #size_categories-10K<n<100K #language-English #region-us \n",
"# Napoleon Bonaparte\nThe Napoleon Bonaparte dataset is a collection of information and data related to the life and reign of Napoleon Bonaparte. It includes details on his military campaigns, battles, and conquests, as well as information on his political career as Emperor of France. The dataset also contains information on the social and economic reforms he implemented in France, such as the establishment of the Napoleonic Code. The data is gathered from various sources, including historical records, biographies, and academic research."
] |
0df3c3a973d9675497251d84e93fa917371c4976 | created a total of 50 images
jlbaker361/ddpo-stability-CONDITIONAL std: 0.34964269399642944 mean: 3.874679765701294
jlbaker361/ddpo-stability-dcgan-CONDITIONAL std: 0.30865755677223206 mean: 3.876197566986084 | jlbaker361/stability-ddpo-evaluation-0 | [
"region:us"
] | 2024-02-01T16:02:48+00:00 | {} | 2024-02-02T23:38:09+00:00 | [] | [] | TAGS
#region-us
| created a total of 50 images
jlbaker361/ddpo-stability-CONDITIONAL std: 0.34964269399642944 mean: 3.874679765701294
jlbaker361/ddpo-stability-dcgan-CONDITIONAL std: 0.30865755677223206 mean: 3.876197566986084 | [] | [
"TAGS\n#region-us \n"
] |
330ffe9075489394f20d85fe01ec86955a71ee82 |
# CroissantLLM: A Truly Bilingual French-English Language Model
## Dataset
Resources are currently being uploaded!
https://arxiv.org/abs/2402.00786
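Once the upload is complete, loading should follow the standard `datasets` pattern (a sketch; the available config names are not yet documented, so check the repo's file listing first):

```python
from datasets import load_dataset

# Streaming avoids downloading the full corpus (10B-100B token range).
ds = load_dataset(
    "croissantllm/croissant_dataset_no_web_data",
    split="train",
    streaming=True,
)
print(next(iter(ds)))
```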
## Licenses
Data redistributed here is subject to the original license under which it was collected. All license information is detailed in the `Data` section of the Technical report.
## Citation
```
@misc{faysse2024croissantllm,
title={CroissantLLM: A Truly Bilingual French-English Language Model},
author={Manuel Faysse and Patrick Fernandes and Nuno M. Guerreiro and António Loison and Duarte M. Alves and Caio Corro and Nicolas Boizard and João Alves and Ricardo Rei and Pedro H. Martins and Antoni Bigata Casademunt and François Yvon and André F. T. Martins and Gautier Viaud and Céline Hudelot and Pierre Colombo},
year={2024},
eprint={2402.00786},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| croissantllm/croissant_dataset_no_web_data | [
"task_categories:translation",
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_categories:fill-mask",
"size_categories:10B<n<100B",
"language:fr",
"language:en",
"arxiv:2402.00786",
"region:us"
] | 2024-02-01T16:06:32+00:00 | {"language": ["fr", "en"], "size_categories": ["10B<n<100B"], "task_categories": ["translation", "text-generation", "text2text-generation", "fill-mask"]} | 2024-02-15T08:45:51+00:00 | [
"2402.00786"
] | [
"fr",
"en"
] | TAGS
#task_categories-translation #task_categories-text-generation #task_categories-text2text-generation #task_categories-fill-mask #size_categories-10B<n<100B #language-French #language-English #arxiv-2402.00786 #region-us
|
# CroissantLLM: A Truly Bilingual French-English Language Model
## Dataset
Resources are currently being uploaded!
URL
## Licenses
Data redistributed here is subject to the original license under which it was collected. All license information is detailed in the 'Data' section of the Technical report.
| [
"# CroissantLLM: A Truly Bilingual French-English Language Model",
"## Dataset\n\nRessources are currently being uploaded !\n\nURL",
"## Licenses\n\nData redistributed here is subject to the original license under which it was collected. All license information is detailed in the 'Data' section of the Technical report."
] | [
"TAGS\n#task_categories-translation #task_categories-text-generation #task_categories-text2text-generation #task_categories-fill-mask #size_categories-10B<n<100B #language-French #language-English #arxiv-2402.00786 #region-us \n",
"# CroissantLLM: A Truly Bilingual French-English Language Model",
"## Dataset\n\nRessources are currently being uploaded !\n\nURL",
"## Licenses\n\nData redistributed here is subject to the original license under which it was collected. All license information is detailed in the 'Data' section of the Technical report."
] |
9d3f9e303e30bf6925f4403a36c6d5b39196d31e |
<div align="center">
<p>
<a href="https://www.github.com/hichtala/draw" target="_blank">
<img src="https://raw.githubusercontent.com/HichTala/draw/master/figures/banner-draw.png">
</p>
DRAW (which stands for **D**etect and **R**ecognize **A** **W**ild range of cards) is the very first object detector
trained to detect _Yu-Gi-Oh!_ cards in all types of images, and in particular in dueling images.
Other works exist (see [Related Works](#div-aligncenterrelated-worksdiv)) but none is capable of recognizing cards during a duel.
DRAW is entirely open source and all contributions are welcome.
</div>
---
## <div align="center">📄Documentation</div>
<details open>
<summary>
Install
</summary>
Both a docker installation and a more conventional installation are available. If you're not very familiar with all the code,
docker installation is recommended. Otherwise, opt for the classic installation.
#### Docker installation
If you are familiar with docker, the docker image is available [here](https://hub.docker.com/r/hichtala/draw).
Otherwise, I recommend you download [DockerDesktop](https://www.docker.com/products/docker-desktop/) if you are on Windows.
If you are on Linux, you can refer to the documentation [here](https://docs.docker.com/engine/install/).
Once it is done, you simply have to execute the following command,
```shell
docker run -p 5000:5000 --name draw hichtala/draw:latest
```
Your installation is now completed. You can press `Ctrl+C` and continue to Usage section.
#### Classic installation
You need python to be installed. Python installation isn't going to be detailed here, you can refer to the [documentation](https://www.python.org/).
We first need to install pytorch. It is recommended to use a package manager such as [miniconda](https://docs.conda.io/projects/miniconda/en/latest/).
Please refer to the [documentation](https://docs.conda.io/projects/miniconda/en/latest/).
When everything is set up you can run the following command to install pytorch:
```shell
python -m pip install torch torchvision
```
If you want to use your GPUs to make everything run faster, please refer to the [documentation](https://pytorch.org/get-started/locally/)
Then you just have to clone the repo and install `requirements`:
```shell
git clone https://github.com/HichTala/draw
cd draw
python -m pip install -r requirements.txt
```
Your installation is now completed.
</details>
<details open>
<summary>Usage</summary>
Now to use it you need to download the models and the data, as described in section [Models and Data](#div-aligncentermodels-and-datadiv).
Once you have them, follow the instructions depending on whether you have a Docker or classic installation.
Put all the models in the same folder, and keep the dataset as it is.
#### Docker installation
You have to copy the data and models in the container. Execute the following command:
```shell
docker cp path/to/dataset/club_yugioh_dataset draw:/data
docker cp path/to/model/folder draw:/models
```
Once it is done you just have to run the command:
```shell
docker start draw
```
open the address `localhost:5000`, and enjoy. See [below](#both) for details about the parameters
#### Classic installation
You need to modify the `config.json` file by putting the path of your dataset folder in the `"data_path"` parameter
and the path to the model folder in the `"trained_models"` parameter.
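For reference, a minimal `config.json` might look like this (key names taken from the paragraph above; the paths are placeholders to replace with your own):

```json
{
    "data_path": "path/to/dataset/club_yugioh_dataset",
    "trained_models": "path/to/model/folder"
}
```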
Once done, just run:
```shell
flask --app app.py run
```
open the address `localhost:5000`, and enjoy. See [below](#both) for details about the parameters
#### Both
* In the first parameter, the one with gears, put the `config.json` file
* In the second parameter, the one with a camera, put the video you want to process (leave it empty to use your webcam)
* In the last one, put your deck list in the `ydk` format
Then you can press the button and start the process!
</details>
---
## <div align="center">⚙️Models and Data</div>
<details open>
<summary>Models</summary>
In this project, the tasks were divided so that one model would locate the cards and another model would classify them.
Similarly, to classify the cards, I divided the task so that there is one model for each type of card,
and the model to be used was determined by the color of the card.
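A hypothetical sketch of that two-stage pipeline is shown below; the model paths, the colour-to-type mapping and the `dominant_frame_colour` helper are placeholders, not the released weights or the exact dispatch logic:

```python
from PIL import Image
from transformers import BeitForImageClassification, BeitImageProcessor
from ultralytics import YOLO

detector = YOLO("models/yolo_draw.pt")  # placeholder localization weights
processor = BeitImageProcessor.from_pretrained("models/beit_retro")
classifiers = {  # one classifier per card type, keyed by frame colour
    "orange": BeitForImageClassification.from_pretrained("models/beit_effect"),
    "green": BeitForImageClassification.from_pretrained("models/beit_spell"),
}

def dominant_frame_colour(card: Image.Image) -> str:
    raise NotImplementedError  # e.g. average hue of the card border

def recognize(image_path: str) -> None:
    image = Image.open(image_path)
    for box in detector(image)[0].boxes.xyxy.tolist():  # one box per card
        card = image.crop(tuple(box))
        model = classifiers[dominant_frame_colour(card)]
        logits = model(**processor(images=card, return_tensors="pt")).logits
        print(model.config.id2label[logits.argmax(-1).item()])
```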
Models can be downloaded from <a href="https://huggingface.co/HichTala/draw">Hugging Face</a>.
Models whose names start with `beit` are for classification and the ones starting with `yolo` are for localization.
[](https://huggingface.co/HichTala/draw)
For now only models for "retro" gameplay are available but the ones for classic format play will be added soon.
I considered as "retro" format all cards before the first _Synchro_ set, i.e. all the cards edited until the Light of Destruction set (LODT - 05/13/2008) and all speed duel cards.
</details>
<details open>
<summary>Data</summary>
To create a dataset, the <a href="https://ygoprodeck.com/api-guide/">YGOPRODeck</a> API was used. Two datasets were thus created,
one for "retro" play and the other for classic format play. Just as there is a model for each type of card,
there is a dataset for each type of card.
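For illustration, pulling card metadata and artwork URLs from that API could look like this (assuming the current v7 endpoint and response schema; filtering down to the "retro" card pool is omitted):

```python
import requests

resp = requests.get("https://db.ygoprodeck.com/api/v7/cardinfo.php", timeout=60)
for card in resp.json()["data"][:3]:
    print(card["name"], card["card_images"][0]["image_url"])
```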
The dataset can be downloaded from <a href="">Hugging Face</a>.
[](https://huggingface.co/datasets/HichTala/yugioh_dataset)
For now only "retro" dataset is available, but the one for classic format play will be added soon.
</details>
---
## <div align="center">💡Inspiration</div>
This project is inspired by content creator [SuperZouloux](https://www.youtube.com/watch?v=64-LfbggqKI)'s idea of a hologram bringing _Yu-Gi-Oh!_ cards to life.
His project uses chips inserted under the sleeves of each card,
which are read by the play mat, enabling the cards to be recognized.
Inserting the chips into the sleeves is not only laborious, but also poses another problem:
face-down cards are read in the same way as face-up ones.
So an automatic detector is a really suitable solution.
Although this project was discouraged by _KONAMI_ <sup>®</sup>, the game's publisher (which is quite understandable),
we can nevertheless imagine such a system being used to display the cards played during a live duel,
to allow spectators to read the cards.
---
## <div align="center">🔗Related Works</div>
Although to my knowledge `draw` is the first detector capable of locating and recognizing _Yu-Gi-Oh!_ cards in a dueling environment,
other works exist and were a source of inspiration for this project. It's worth mentioning them here.
[Yu-Gi-Oh! NEURON](https://www.konami.com/games/eu/fr/products/yugioh_neuron/) is an official application developed by _KONAMI_ <sup>®</sup>.
It's packed with features, including card recognition. The application is capable of recognizing a total of 20 cards at a time, which is very decent.
The drawback is that the cards must be of good quality to be recognized, which is not necessarily the case in a duel context.
What's more, it can't be integrated, so the only way to use it is to use the application.
[yugioh one shot learning](https://github.com/vanstorm9/yugioh-one-shot-learning) made by `vanstorm9` is a
Yu-Gi-Oh! card classification program that allows you to recognize cards. It uses a siamese network to train its classification
model. It gives very impressive results on good-quality images but not-so-good results on low-quality images, and it
can't localize cards.
[Yolov8](https://github.com/ultralytics/ultralytics) is the latest version of the very famous `yolo` family of object detection models.
It hardly needs an introduction today: it represents the state of the art in real-time object detection.
[BEiT](https://arxiv.org/pdf/2106.08254.pdf) is a pre-trained model for image classification. It uses image transformers,
which are based on the attention mechanism. It suits our problem because the authors also provide a model pre-trained on `ImageNet-22K`,
a dataset with 22k classes (more than most classifiers handle), which is interesting for our case since there are more than 11k cards in _Yu-Gi-Oh!_.
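For reference, reusing that checkpoint with a card-level head is straightforward in `transformers`. The checkpoint name below is the public one; the label count is only an assumption based on the size of the card pool.

```python
# Re-heading the public ImageNet-22K BEiT checkpoint for card classification.
# The checkpoint name is real; num_labels is an assumption (one class per card).
from transformers import BeitImageProcessor, BeitForImageClassification

checkpoint = "microsoft/beit-base-patch16-224-pt22k-ft22k"
processor = BeitImageProcessor.from_pretrained(checkpoint)
model = BeitForImageClassification.from_pretrained(
    checkpoint,
    num_labels=11000,              # roughly one label per card (assumption)
    ignore_mismatched_sizes=True,  # drop the 22k-way head, init a fresh one
)
```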
---
## <div align="center">🔍Method Overview</div>
A Medium blog post will soon be written and published, explaining the main process from data collection to final prediction.
If you have any questions, don't hesitate to open an issue.
---
## <div align="center">💬Contact</div>
You can reach me on Twitter [@tiazden](https://twitter.com/tiazden) or by email at [[email protected]](mailto:[email protected]). | HichTala/yugioh_dataset | [
"task_categories:image-classification",
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"arxiv:2106.08254",
"region:us"
] | 2024-02-01T16:08:22+00:00 | {"language": ["en"], "license": "mit", "size_categories": ["1K<n<10K"], "task_categories": ["image-classification"]} | 2024-02-09T15:10:52+00:00 | [
"2106.08254"
] | [
"en"
] | TAGS
#task_categories-image-classification #size_categories-1K<n<10K #language-English #license-mit #arxiv-2106.08254 #region-us
|
<div align="center">
<p>
<a href="URL target="_blank">
<img src="URL
</p>
DRAW (which stands for Detect and Recognize A Wild range of cards) is the very first object detector
trained to detect _Yu-Gi-Oh!_ cards in all types of images, and in particular in dueling images.
Other works exist (see Related Works) but none is capable of recognizing cards during a duel.
DRAW is entirely open source and all contributions are welcome.
</div>
---
## <div align="center">Documentation</div>
<details open>
<summary>
Install
</summary>
Both a docker installation and a more conventional installation are available. If you're not very familiar with all the code,
docker installation is recommended. Otherwise, opt for the classic installation.
#### Docker installation
If you are familiar with docker, the docker image is available here.
Otherwise, I recommend you download Docker Desktop if you are on Windows.
If you are on Linux, you can refer to the documentation here.
Once it is done, you simply have to execute the following command,
Your installation is now completed. You can press 'Ctrl+C' and continue to the Usage section.
#### Classic installation
You need Python to be installed. Python installation isn't going to be detailed here; you can refer to the documentation.
We first need to install pytorch. It is recommended to use a package manager such as miniconda.
Please refer to the documentation.
When everything is set up you can run the following command to install pytorch:
If you want to use your GPUs to make everything run faster, please refer to the documentation.
Then you just have to clone the repo and install 'requirements':
Your installation is now completed.
</details>
<details open>
<summary>Usage</summary>
Now, to use it, you need to download the models and the data described in the section Models and Data.
Once you have them, follow the instructions depending on whether you have the docker or the classic installation.
Put all the models in the same folder, and keep the dataset as it is.
#### Docker installation
You have to copy the data and models into the container. Execute the following command:
Once it is done you just have to run the command:
open the address 'localhost:5000' and enjoy. Refer below for details about the parameters.
#### Classic installation
You need to modify the 'URL' file by putting the path of your dataset folder in the '"data_path"' parameter
and the path to the model folder in the '"trained_models"' parameter.
Once done, just run:
open the address 'localhost:5000' and enjoy. Refer below for details about the parameters.
#### Both
* In the first parameter, the one with gears, put the 'URL' file
* In the second parameter, the one with a camera, put the video you want to process (leave it empty to use your webcam)
* In the last one, put your deck list in the 'ydk' format
Then you can press the button and start the process!
</details>
---
## <div align="center">️Models and Data</div>
<details open>
<summary>Models</summary>
In this project, the tasks were divided so that one model would locate the cards and another model would classify them.
Similarly, to classify the cards, I divided the task so that there is one model for each type of card,
and the model to be used is determined by the color of the card.
Models can be downloaded from <a href="URL">Hugging Face</a>.
Models starting with 'beit' are for classification and the ones starting with 'yolo' are for localization.

For now, only models for "retro" gameplay are available, but the ones for classic format play will be added soon.
I consider the "retro" format to cover all cards released before the first _synchro_ set, that is, all cards edited up to the Light of Destruction set (LODT - 05/13/2008) plus all Speed Duel cards.
</details>
<details open>
<summary>Data</summary>
To create a dataset, the <a href="URL">YGOPRODeck</a> API was used. Two datasets were thus created,
one for "retro" play and the other for classic format play. Just as there is a model for each type of card,
there is a dataset for each type of card.
The dataset can be downloaded from <a href="">Hugging Face</a>.

For now, only the "retro" dataset is available, but the one for classic format play will be added soon.
</details>
---
## <div align="center">Inspiration</div>
This project is inspired by content creator SuperZouloux's idea of a hologram bringing _Yu-Gi-Oh!_ cards to life.
His project uses chips inserted under the sleeves of each card,
which are read by the play mat, enabling the cards to be recognized.
Inserting the chips into the sleeves is not only laborious, but also poses another problem:
face-down cards are read in the same way as face-up ones.
So an automatic detector is a much more suitable solution.
Although this project was discouraged by _KONAMI_ <sup>®</sup>, the game's publisher (which is quite understandable),
we can nevertheless imagine such a system being used to display the cards played during a live duel,
to allow spectators to read the cards.
---
## <div align="center">Related Works</div>
Although to my knowledge 'draw' is the first detector capable of locating and detecting _Yu-Gi-Oh!_ cards in a dueling environment,
other works exist and were a source of inspiration for this project. It's worth mentioning them here.
Yu-Gi-Oh! NEURON is an official application developed by _KONAMI_ <sup>®</sup>.
It's packed with features, including card recognition. The application is capable of recognizing a total of 20 cards at a time, which is very decent.
The drawback is that the cards must be of good quality to be recognized, which is not necessarily the case in a duel context.
What's more, it can't be integrated into other software, so the only way to use it is through the application.
yugioh one shot learning by 'vanstorm9' is a
Yu-Gi-Oh! card classification program that allows you to recognize cards. It uses a Siamese network to train its classification
model. It gives very impressive results on good-quality images but noticeably weaker ones on low-quality images, and it
can't localize cards.
Yolov8 is the latest version of the very famous 'yolo' family of object detection models.
It hardly needs an introduction today: it represents the state of the art in real-time object detection.
BEiT is a pre-trained model for image classification. It uses image transformers,
which are based on the attention mechanism. It suits our problem because the authors also provide a model pre-trained on 'ImageNet-22K',
a dataset with 22k classes (more than most classifiers handle), which is interesting for our case since there are more than 11k cards in _Yu-Gi-Oh!_.
---
## <div align="center">Method Overview</div>
A Medium blog post will soon be written and published, explaining the main process from data collection to final prediction.
If you have any questions, don't hesitate to open an issue.
---
## <div align="center">Contact</div>
You can reach me on Twitter @tiazden or by email at URL@URL. | [
"## <div align=\"center\">Documentation</div>\n\n<details open>\n<summary>\nInstall\n</summary>\n\nBoth a docker installation and a more conventional installation are available. If you're not very familiar with all the code, \ndocker installation is recommended. Otherwise, opt for the classic installation.",
"#### Docker installation\n\nIf you are familiar with docker, the docker image is available here.\n\nOtherwise, I recommend you to download DockerDesktop if you are on Windows.\nIf you are on Linux, you can refer to the documentation here.\n\nOnce it is done, you simply have to execute the following command,\n\nYour installation is now completed. You can press 'Ctrl+C' and continue to Usage section.",
"#### Classic installation\n\nYou need python to be installed. Python installation isn't going to be detailed here, you can refer to the documentation.\n\nWe first need to install pytorch. It is recommended to use a package manager such as miniconda. \nPlease refer to the documentation.\n\nWhen everything is set up you can run the following command to install pytorch:\n\nIf you want to use you gpus to make everything run faster, please refer the documentation\n\nThen you just have to clone the repo and install 'requirements':\n\n\nYour installation is now completed.\n\n</details>\n\n<details open>\n<summary>Usage</summary>\n\nNow to use it you need to download the models and the data, in section Models and Data.\n\nOnce you have it, follow instruction depending on you have docker or classic installation.\nPut all the model in the same folder, and keep the dataset as it is",
"#### Docker installation\n\nYou have to copy the data and models in the container. Execute the following command:\n\n\n\nOnce it is done you just have to run the command:\n\nopen the adress 'localhost:5000', and enjoy the maximum. Refer bellow for details about parameters",
"#### Classic installation\n\nYou need to modify the 'URL' file by putting the paths of you dataset folder in '\"data_path\"' parameter \nand the path to model folder in '\"trained_models\"' parameter.\n\nOnce done, just run:\n\nopen the adress 'localhost:5000', and enjoy the maximum. Refer bellow for details about parameters",
"#### Both\n\n* In the first parameter, the one with gears, put the 'URL' file\n* In the second parameter, the one with a camera, put the video you want to process (leave it empty to use your webcam)\n* In the last one, put your deck list in the format 'ydk'\n\nThen you can press the button and start the process !\n\n</details>\n\n---",
"## <div align=\"center\">️Models and Data</div>\n\n<details open>\n<summary>Models</summary>\n\nIn this project, the tasks were divided so that one model would locate the card and another model would classify them. \nSimilarly, to classify the cards, I divided the task so that there is one model for each type of card,\nand the model to be used was determined by the color of the card.\n\nModels can be downloaded in <a href=\"URL Face</a>. \nModels starting with 'beit' stands for classification and the one starting with 'yolo' for localization.\n\n set and all speed duel cards. \n\n</details>\n\n<details open>\n<summary>Data</summary>\n\nTo create a dataset, the <a href=\"URL api was used. Two datasets were thus created, \none for \"retro\" play and the other for classic format play. Just as there is a model for each type of card,\nthere is a dataset for each type of card.\n\nDataset can be downloaded in <a href=\"\">Hugging Face</a>.\n\n,\nwe can nevertheless imagine such a system being used to display the cards played during a live duel, \nto allow spectators to read the cards.\n\n---",
"## <div align=\"center\">Related Works</div>\n\nAlthough to my knowledge 'draw' is the first detector capable of locating and detecting _Yu-Gi-Oh!_ cards in a dueling environment, \nother works exist and were a source of inspiration for this project. It's worth mentioning them here.\n\nYu-Gi-Oh! NEURON is an official application developed by _KONAMI_ <sup>®</sup>.\nIt's packed with features, including cards recognition. The application is capable of recognizing a total of 20 cards at a time, which is very decent. \nThe drawback is that the cards must be of good quality to be recognized, which is not necessarily the case in a duel context. \nWhat's more, it can't be integrated, so the only way to use it is to use the application.\n\nyugioh one shot learning made by 'vanstorm9' is a \nYu-Gi-Oh! cards classification program that allow you to recognize cards. It uses siamese network to train its classification\nmodel. It gives very impressive results on images with a good quality but not that good on low quality images, and it \ncan't localize cards.\n\nYolov8 is the last version of the very famous 'yolo' family of object detector models.\nI think it doesn't need to be presented today, it represents state-of-the-art real time object detection model.\n\nBEiT is a pre-trained model for image classification. It uses image transofrmers \nwhich are based on attention mechanism. It suits our problem because authors also propose a pre-trained model in 'Imagenet-22K'.\nIt is a dataset with 22k classes (more than most classifiers) which is interesting for our case since there is mode than 11k cards in _Yu-Gi-Oh!_. \n\n---",
"## <div align=\"center\">Method Overview</div>\n\nA medium blog will soon be written and published, explaining the main process from data collection to final prediction.\nIf you have any questions, don't hesitate to open an issue.\n\n---",
"## <div align=\"center\">Contact</div>\n\nYou can reach me on Twitter @tiazden or by email at URL@URL."
] | [
"TAGS\n#task_categories-image-classification #size_categories-1K<n<10K #language-English #license-mit #arxiv-2106.08254 #region-us \n",
"## <div align=\"center\">Documentation</div>\n\n<details open>\n<summary>\nInstall\n</summary>\n\nBoth a docker installation and a more conventional installation are available. If you're not very familiar with all the code, \ndocker installation is recommended. Otherwise, opt for the classic installation.",
"#### Docker installation\n\nIf you are familiar with docker, the docker image is available here.\n\nOtherwise, I recommend you to download DockerDesktop if you are on Windows.\nIf you are on Linux, you can refer to the documentation here.\n\nOnce it is done, you simply have to execute the following command,\n\nYour installation is now completed. You can press 'Ctrl+C' and continue to Usage section.",
"#### Classic installation\n\nYou need python to be installed. Python installation isn't going to be detailed here, you can refer to the documentation.\n\nWe first need to install pytorch. It is recommended to use a package manager such as miniconda. \nPlease refer to the documentation.\n\nWhen everything is set up you can run the following command to install pytorch:\n\nIf you want to use you gpus to make everything run faster, please refer the documentation\n\nThen you just have to clone the repo and install 'requirements':\n\n\nYour installation is now completed.\n\n</details>\n\n<details open>\n<summary>Usage</summary>\n\nNow to use it you need to download the models and the data, in section Models and Data.\n\nOnce you have it, follow instruction depending on you have docker or classic installation.\nPut all the model in the same folder, and keep the dataset as it is",
"#### Docker installation\n\nYou have to copy the data and models in the container. Execute the following command:\n\n\n\nOnce it is done you just have to run the command:\n\nopen the adress 'localhost:5000', and enjoy the maximum. Refer bellow for details about parameters",
"#### Classic installation\n\nYou need to modify the 'URL' file by putting the paths of you dataset folder in '\"data_path\"' parameter \nand the path to model folder in '\"trained_models\"' parameter.\n\nOnce done, just run:\n\nopen the adress 'localhost:5000', and enjoy the maximum. Refer bellow for details about parameters",
"#### Both\n\n* In the first parameter, the one with gears, put the 'URL' file\n* In the second parameter, the one with a camera, put the video you want to process (leave it empty to use your webcam)\n* In the last one, put your deck list in the format 'ydk'\n\nThen you can press the button and start the process !\n\n</details>\n\n---",
"## <div align=\"center\">️Models and Data</div>\n\n<details open>\n<summary>Models</summary>\n\nIn this project, the tasks were divided so that one model would locate the card and another model would classify them. \nSimilarly, to classify the cards, I divided the task so that there is one model for each type of card,\nand the model to be used was determined by the color of the card.\n\nModels can be downloaded in <a href=\"URL Face</a>. \nModels starting with 'beit' stands for classification and the one starting with 'yolo' for localization.\n\n set and all speed duel cards. \n\n</details>\n\n<details open>\n<summary>Data</summary>\n\nTo create a dataset, the <a href=\"URL api was used. Two datasets were thus created, \none for \"retro\" play and the other for classic format play. Just as there is a model for each type of card,\nthere is a dataset for each type of card.\n\nDataset can be downloaded in <a href=\"\">Hugging Face</a>.\n\n,\nwe can nevertheless imagine such a system being used to display the cards played during a live duel, \nto allow spectators to read the cards.\n\n---",
"## <div align=\"center\">Related Works</div>\n\nAlthough to my knowledge 'draw' is the first detector capable of locating and detecting _Yu-Gi-Oh!_ cards in a dueling environment, \nother works exist and were a source of inspiration for this project. It's worth mentioning them here.\n\nYu-Gi-Oh! NEURON is an official application developed by _KONAMI_ <sup>®</sup>.\nIt's packed with features, including cards recognition. The application is capable of recognizing a total of 20 cards at a time, which is very decent. \nThe drawback is that the cards must be of good quality to be recognized, which is not necessarily the case in a duel context. \nWhat's more, it can't be integrated, so the only way to use it is to use the application.\n\nyugioh one shot learning made by 'vanstorm9' is a \nYu-Gi-Oh! cards classification program that allow you to recognize cards. It uses siamese network to train its classification\nmodel. It gives very impressive results on images with a good quality but not that good on low quality images, and it \ncan't localize cards.\n\nYolov8 is the last version of the very famous 'yolo' family of object detector models.\nI think it doesn't need to be presented today, it represents state-of-the-art real time object detection model.\n\nBEiT is a pre-trained model for image classification. It uses image transofrmers \nwhich are based on attention mechanism. It suits our problem because authors also propose a pre-trained model in 'Imagenet-22K'.\nIt is a dataset with 22k classes (more than most classifiers) which is interesting for our case since there is mode than 11k cards in _Yu-Gi-Oh!_. \n\n---",
"## <div align=\"center\">Method Overview</div>\n\nA medium blog will soon be written and published, explaining the main process from data collection to final prediction.\nIf you have any questions, don't hesitate to open an issue.\n\n---",
"## <div align=\"center\">Contact</div>\n\nYou can reach me on Twitter @tiazden or by email at URL@URL."
] |
56def96440a8de48a3e1776265f839cde161e367 | **MultiParaDetox**
This is the multilingual parallel dataset for text detoxification prepared for [CLEF TextDetox 2024](https://pan.webis.de/clef24/pan24-web/text-detoxification.html) shared task.
For each of 9 languages, we collected 1k pairs of toxic<->detoxified instances split into two parts: dev (400 pairs) and test (600 pairs).
**Dev set references and test set toxic sentences will be released later!**
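For instance, the currently released toxic inputs can be loaded per language; the split names and the `toxic_sentence` field follow this repository's dataset info.

```python
from datasets import load_dataset

# Each language is its own split; only the toxic inputs of the dev part
# (400 sentences per language) are released for now.
dev_en = load_dataset("textdetox/multilingual_paradetox", split="en")
print(dev_en[0]["toxic_sentence"])
```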
The list of the sources for the original toxic sentences:
* English: [Jigsaw](https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge), [Unitary AI Toxicity Dataset](https://github.com/unitaryai/detoxify)
* Russian: [Russian Language Toxic Comments](https://www.kaggle.com/datasets/blackmoon/russian-language-toxic-comments), [Toxic Russian Comments](https://www.kaggle.com/datasets/alexandersemiletov/toxic-russian-comments)
* Ukrainian: [Ukrainian Twitter texts](https://github.com/saganoren/ukr-twi-corpus)
* Spanish: TBD
* German: [GermEval 2018, 2021](https://aclanthology.org/2021.germeval-1.1/)
* Amharic: [Amharic Hate Speech](https://github.com/uhh-lt/AmharicHateSpeech)
* Arabic: [OSACT4](https://edinburghnlp.inf.ed.ac.uk/workshops/OSACT4/)
* Hindi: [Hostility Detection Dataset in Hindi](https://competitions.codalab.org/competitions/26654#learn_the_details-dataset), [Overview of the HASOC track at FIRE 2019: Hate Speech and Offensive Content Identification in Indo-European Languages](https://dl.acm.org/doi/pdf/10.1145/3368567.3368584?download=true) | textdetox/multilingual_paradetox | [
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:en",
"language:uk",
"language:ru",
"language:de",
"language:zh",
"language:am",
"language:ar",
"language:hi",
"language:es",
"license:openrail++",
"region:us"
] | 2024-02-01T16:31:13+00:00 | {"language": ["en", "uk", "ru", "de", "zh", "am", "ar", "hi", "es"], "license": "openrail++", "size_categories": ["1K<n<10K"], "task_categories": ["text-generation"], "dataset_info": {"features": [{"name": "toxic_sentence", "dtype": "string"}], "splits": [{"name": "en", "num_bytes": 24945, "num_examples": 400}, {"name": "ru", "num_bytes": 48249, "num_examples": 400}, {"name": "uk", "num_bytes": 40226, "num_examples": 400}, {"name": "de", "num_bytes": 44940, "num_examples": 400}, {"name": "am", "num_bytes": 72606, "num_examples": 400}, {"name": "zh", "num_bytes": 36219, "num_examples": 400}, {"name": "ar", "num_bytes": 44668, "num_examples": 400}, {"name": "hi", "num_bytes": 57291, "num_examples": 400}], "download_size": 234067, "dataset_size": 369144}, "configs": [{"config_name": "default", "data_files": [{"split": "en", "path": "data/en-*"}, {"split": "ru", "path": "data/ru-*"}, {"split": "uk", "path": "data/uk-*"}, {"split": "de", "path": "data/de-*"}, {"split": "am", "path": "data/am-*"}, {"split": "zh", "path": "data/zh-*"}, {"split": "ar", "path": "data/ar-*"}, {"split": "hi", "path": "data/hi-*"}]}]} | 2024-02-14T09:29:24+00:00 | [] | [
"en",
"uk",
"ru",
"de",
"zh",
"am",
"ar",
"hi",
"es"
] | TAGS
#task_categories-text-generation #size_categories-1K<n<10K #language-English #language-Ukrainian #language-Russian #language-German #language-Chinese #language-Amharic #language-Arabic #language-Hindi #language-Spanish #license-openrail++ #region-us
| MultiParaDetox
This is the multilingual parallel dataset for text detoxification prepared for CLEF TextDetox 2024 shared task.
For each of 9 languages, we collected 1k pairs of toxic<->detoxified instances split into two parts: dev (400 pairs) and test (600 pairs).
Dev set references and test set toxic sentences will be released later!
The list of the sources for the original toxic sentences:
* English: Jigsaw, Unitary AI Toxicity Dataset
* Russian: Russian Language Toxic Comments, Toxic Russian Comments
* Ukrainian: Ukrainian Twitter texts
* Spanish: TBD
* German: GermEval 2018, 2021
* Amharic: Amharic Hate Speech
* Arabic: OSACT4
* Hindi: Hostility Detection Dataset in Hindi, Overview of the HASOC track at FIRE 2019: Hate Speech and Offensive Content Identification in Indo-European Languages | [] | [
"TAGS\n#task_categories-text-generation #size_categories-1K<n<10K #language-English #language-Ukrainian #language-Russian #language-German #language-Chinese #language-Amharic #language-Arabic #language-Hindi #language-Spanish #license-openrail++ #region-us \n"
] |
e8bc282600cf9d29cfc8d2b166a4abe1d834aae7 |
# Dataset Card for Evaluation run of Gille/StrangeMerges_9-7B-dare_ties
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [Gille/StrangeMerges_9-7B-dare_ties](https://huggingface.co/Gille/StrangeMerges_9-7B-dare_ties) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Gille__StrangeMerges_9-7B-dare_ties",
"harness_winogrande_5",
split="train")
```
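Likewise, the aggregated metrics can be loaded from the "results" configuration (per the note above, the "train" split always points at the latest run):

```python
from datasets import load_dataset

# Aggregated metrics for the whole run; "train" tracks the latest results.
results = load_dataset(
    "open-llm-leaderboard/details_Gille__StrangeMerges_9-7B-dare_ties",
    "results",
    split="train",
)
```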
## Latest results
These are the [latest results from run 2024-02-01T16:46:42.889745](https://huggingface.co/datasets/open-llm-leaderboard/details_Gille__StrangeMerges_9-7B-dare_ties/blob/main/results_2024-02-01T16-46-42.889745.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6552529876820988,
"acc_stderr": 0.032059624813676246,
"acc_norm": 0.6554972315448003,
"acc_norm_stderr": 0.0327183378846795,
"mc1": 0.48959608323133413,
"mc1_stderr": 0.017499711430249264,
"mc2": 0.6508442171488594,
"mc2_stderr": 0.015062470027570826
},
"harness|arc:challenge|25": {
"acc": 0.6638225255972696,
"acc_stderr": 0.01380485502620576,
"acc_norm": 0.7030716723549488,
"acc_norm_stderr": 0.013352025976725223
},
"harness|hellaswag|10": {
"acc": 0.6904003186616212,
"acc_stderr": 0.0046138371388100355,
"acc_norm": 0.8746265684126668,
"acc_norm_stderr": 0.0033046510372765512
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695235,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695235
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6592592592592592,
"acc_stderr": 0.040943762699967926,
"acc_norm": 0.6592592592592592,
"acc_norm_stderr": 0.040943762699967926
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6973684210526315,
"acc_stderr": 0.037385206761196686,
"acc_norm": 0.6973684210526315,
"acc_norm_stderr": 0.037385206761196686
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.66,
"acc_stderr": 0.04760952285695238,
"acc_norm": 0.66,
"acc_norm_stderr": 0.04760952285695238
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7132075471698113,
"acc_stderr": 0.02783491252754406,
"acc_norm": 0.7132075471698113,
"acc_norm_stderr": 0.02783491252754406
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7708333333333334,
"acc_stderr": 0.03514697467862388,
"acc_norm": 0.7708333333333334,
"acc_norm_stderr": 0.03514697467862388
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.52,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.52,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.55,
"acc_stderr": 0.05,
"acc_norm": 0.55,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.32,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.32,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6705202312138728,
"acc_stderr": 0.03583901754736412,
"acc_norm": 0.6705202312138728,
"acc_norm_stderr": 0.03583901754736412
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4411764705882353,
"acc_stderr": 0.049406356306056595,
"acc_norm": 0.4411764705882353,
"acc_norm_stderr": 0.049406356306056595
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.77,
"acc_stderr": 0.04229525846816508,
"acc_norm": 0.77,
"acc_norm_stderr": 0.04229525846816508
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5617021276595745,
"acc_stderr": 0.03243618636108101,
"acc_norm": 0.5617021276595745,
"acc_norm_stderr": 0.03243618636108101
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4824561403508772,
"acc_stderr": 0.04700708033551038,
"acc_norm": 0.4824561403508772,
"acc_norm_stderr": 0.04700708033551038
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5379310344827586,
"acc_stderr": 0.04154659671707548,
"acc_norm": 0.5379310344827586,
"acc_norm_stderr": 0.04154659671707548
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.40476190476190477,
"acc_stderr": 0.025279850397404904,
"acc_norm": 0.40476190476190477,
"acc_norm_stderr": 0.025279850397404904
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4603174603174603,
"acc_stderr": 0.04458029125470973,
"acc_norm": 0.4603174603174603,
"acc_norm_stderr": 0.04458029125470973
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.38,
"acc_stderr": 0.04878317312145633,
"acc_norm": 0.38,
"acc_norm_stderr": 0.04878317312145633
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7903225806451613,
"acc_stderr": 0.023157879349083522,
"acc_norm": 0.7903225806451613,
"acc_norm_stderr": 0.023157879349083522
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5024630541871922,
"acc_stderr": 0.035179450386910616,
"acc_norm": 0.5024630541871922,
"acc_norm_stderr": 0.035179450386910616
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7757575757575758,
"acc_stderr": 0.03256866661681102,
"acc_norm": 0.7757575757575758,
"acc_norm_stderr": 0.03256866661681102
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8282828282828283,
"acc_stderr": 0.026869716187429917,
"acc_norm": 0.8282828282828283,
"acc_norm_stderr": 0.026869716187429917
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8963730569948186,
"acc_stderr": 0.02199531196364424,
"acc_norm": 0.8963730569948186,
"acc_norm_stderr": 0.02199531196364424
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6743589743589744,
"acc_stderr": 0.02375966576741229,
"acc_norm": 0.6743589743589744,
"acc_norm_stderr": 0.02375966576741229
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.35555555555555557,
"acc_stderr": 0.029185714949857413,
"acc_norm": 0.35555555555555557,
"acc_norm_stderr": 0.029185714949857413
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6722689075630253,
"acc_stderr": 0.03048991141767323,
"acc_norm": 0.6722689075630253,
"acc_norm_stderr": 0.03048991141767323
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3708609271523179,
"acc_stderr": 0.039439666991836285,
"acc_norm": 0.3708609271523179,
"acc_norm_stderr": 0.039439666991836285
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8422018348623853,
"acc_stderr": 0.01563002297009244,
"acc_norm": 0.8422018348623853,
"acc_norm_stderr": 0.01563002297009244
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5277777777777778,
"acc_stderr": 0.0340470532865388,
"acc_norm": 0.5277777777777778,
"acc_norm_stderr": 0.0340470532865388
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8284313725490197,
"acc_stderr": 0.02646056956124064,
"acc_norm": 0.8284313725490197,
"acc_norm_stderr": 0.02646056956124064
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7974683544303798,
"acc_stderr": 0.026160568246601446,
"acc_norm": 0.7974683544303798,
"acc_norm_stderr": 0.026160568246601446
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6860986547085202,
"acc_stderr": 0.031146796482972465,
"acc_norm": 0.6860986547085202,
"acc_norm_stderr": 0.031146796482972465
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7862595419847328,
"acc_stderr": 0.0359546161177469,
"acc_norm": 0.7862595419847328,
"acc_norm_stderr": 0.0359546161177469
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.768595041322314,
"acc_stderr": 0.0384985609879409,
"acc_norm": 0.768595041322314,
"acc_norm_stderr": 0.0384985609879409
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7685185185185185,
"acc_stderr": 0.04077494709252626,
"acc_norm": 0.7685185185185185,
"acc_norm_stderr": 0.04077494709252626
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7607361963190185,
"acc_stderr": 0.0335195387952127,
"acc_norm": 0.7607361963190185,
"acc_norm_stderr": 0.0335195387952127
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.44642857142857145,
"acc_stderr": 0.04718471485219588,
"acc_norm": 0.44642857142857145,
"acc_norm_stderr": 0.04718471485219588
},
"harness|hendrycksTest-management|5": {
"acc": 0.7766990291262136,
"acc_stderr": 0.04123553189891431,
"acc_norm": 0.7766990291262136,
"acc_norm_stderr": 0.04123553189891431
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8846153846153846,
"acc_stderr": 0.02093019318517933,
"acc_norm": 0.8846153846153846,
"acc_norm_stderr": 0.02093019318517933
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.73,
"acc_stderr": 0.044619604333847394,
"acc_norm": 0.73,
"acc_norm_stderr": 0.044619604333847394
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8326947637292464,
"acc_stderr": 0.013347327202920332,
"acc_norm": 0.8326947637292464,
"acc_norm_stderr": 0.013347327202920332
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7456647398843931,
"acc_stderr": 0.023445826276545546,
"acc_norm": 0.7456647398843931,
"acc_norm_stderr": 0.023445826276545546
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.41899441340782123,
"acc_stderr": 0.016501579306861677,
"acc_norm": 0.41899441340782123,
"acc_norm_stderr": 0.016501579306861677
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7124183006535948,
"acc_stderr": 0.02591780611714716,
"acc_norm": 0.7124183006535948,
"acc_norm_stderr": 0.02591780611714716
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.707395498392283,
"acc_stderr": 0.02583989833487798,
"acc_norm": 0.707395498392283,
"acc_norm_stderr": 0.02583989833487798
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.75,
"acc_stderr": 0.02409347123262133,
"acc_norm": 0.75,
"acc_norm_stderr": 0.02409347123262133
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4787234042553192,
"acc_stderr": 0.029800481645628693,
"acc_norm": 0.4787234042553192,
"acc_norm_stderr": 0.029800481645628693
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4661016949152542,
"acc_stderr": 0.01274085387294983,
"acc_norm": 0.4661016949152542,
"acc_norm_stderr": 0.01274085387294983
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6838235294117647,
"acc_stderr": 0.02824568739146294,
"acc_norm": 0.6838235294117647,
"acc_norm_stderr": 0.02824568739146294
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6715686274509803,
"acc_stderr": 0.018999707383162673,
"acc_norm": 0.6715686274509803,
"acc_norm_stderr": 0.018999707383162673
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6818181818181818,
"acc_stderr": 0.044612721759105085,
"acc_norm": 0.6818181818181818,
"acc_norm_stderr": 0.044612721759105085
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7428571428571429,
"acc_stderr": 0.02797982353874455,
"acc_norm": 0.7428571428571429,
"acc_norm_stderr": 0.02797982353874455
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8606965174129353,
"acc_stderr": 0.024484487162913973,
"acc_norm": 0.8606965174129353,
"acc_norm_stderr": 0.024484487162913973
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.86,
"acc_stderr": 0.0348735088019777,
"acc_norm": 0.86,
"acc_norm_stderr": 0.0348735088019777
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5421686746987951,
"acc_stderr": 0.0387862677100236,
"acc_norm": 0.5421686746987951,
"acc_norm_stderr": 0.0387862677100236
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8421052631578947,
"acc_stderr": 0.02796678585916089,
"acc_norm": 0.8421052631578947,
"acc_norm_stderr": 0.02796678585916089
},
"harness|truthfulqa:mc|0": {
"mc1": 0.48959608323133413,
"mc1_stderr": 0.017499711430249264,
"mc2": 0.6508442171488594,
"mc2_stderr": 0.015062470027570826
},
"harness|winogrande|5": {
"acc": 0.813733228097869,
"acc_stderr": 0.01094187795567621
},
"harness|gsm8k|5": {
"acc": 0.7058377558756633,
"acc_stderr": 0.012551285331470157
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_Gille__StrangeMerges_9-7B-dare_ties | [
"region:us"
] | 2024-02-01T16:49:01+00:00 | {"pretty_name": "Evaluation run of Gille/StrangeMerges_9-7B-dare_ties", "dataset_summary": "Dataset automatically created during the evaluation run of model [Gille/StrangeMerges_9-7B-dare_ties](https://huggingface.co/Gille/StrangeMerges_9-7B-dare_ties) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Gille__StrangeMerges_9-7B-dare_ties\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-02-01T16:46:42.889745](https://huggingface.co/datasets/open-llm-leaderboard/details_Gille__StrangeMerges_9-7B-dare_ties/blob/main/results_2024-02-01T16-46-42.889745.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6552529876820988,\n \"acc_stderr\": 0.032059624813676246,\n \"acc_norm\": 0.6554972315448003,\n \"acc_norm_stderr\": 0.0327183378846795,\n \"mc1\": 0.48959608323133413,\n \"mc1_stderr\": 0.017499711430249264,\n \"mc2\": 0.6508442171488594,\n \"mc2_stderr\": 0.015062470027570826\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6638225255972696,\n \"acc_stderr\": 0.01380485502620576,\n \"acc_norm\": 0.7030716723549488,\n \"acc_norm_stderr\": 0.013352025976725223\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6904003186616212,\n \"acc_stderr\": 0.0046138371388100355,\n \"acc_norm\": 0.8746265684126668,\n \"acc_norm_stderr\": 0.0033046510372765512\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6592592592592592,\n \"acc_stderr\": 0.040943762699967926,\n \"acc_norm\": 0.6592592592592592,\n \"acc_norm_stderr\": 0.040943762699967926\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6973684210526315,\n \"acc_stderr\": 0.037385206761196686,\n \"acc_norm\": 0.6973684210526315,\n \"acc_norm_stderr\": 0.037385206761196686\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.66,\n \"acc_stderr\": 0.04760952285695238,\n \"acc_norm\": 0.66,\n \"acc_norm_stderr\": 0.04760952285695238\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.7132075471698113,\n \"acc_stderr\": 0.02783491252754406,\n \"acc_norm\": 0.7132075471698113,\n \"acc_norm_stderr\": 0.02783491252754406\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7708333333333334,\n \"acc_stderr\": 0.03514697467862388,\n \"acc_norm\": 0.7708333333333334,\n \"acc_norm_stderr\": 0.03514697467862388\n },\n 
\"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.52,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.52,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.55,\n \"acc_stderr\": 0.05,\n \"acc_norm\": 0.55,\n \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6705202312138728,\n \"acc_stderr\": 0.03583901754736412,\n \"acc_norm\": 0.6705202312138728,\n \"acc_norm_stderr\": 0.03583901754736412\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.4411764705882353,\n \"acc_stderr\": 0.049406356306056595,\n \"acc_norm\": 0.4411764705882353,\n \"acc_norm_stderr\": 0.049406356306056595\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.77,\n \"acc_stderr\": 0.04229525846816508,\n \"acc_norm\": 0.77,\n \"acc_norm_stderr\": 0.04229525846816508\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5617021276595745,\n \"acc_stderr\": 0.03243618636108101,\n \"acc_norm\": 0.5617021276595745,\n \"acc_norm_stderr\": 0.03243618636108101\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4824561403508772,\n \"acc_stderr\": 0.04700708033551038,\n \"acc_norm\": 0.4824561403508772,\n \"acc_norm_stderr\": 0.04700708033551038\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5379310344827586,\n \"acc_stderr\": 0.04154659671707548,\n \"acc_norm\": 0.5379310344827586,\n \"acc_norm_stderr\": 0.04154659671707548\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.40476190476190477,\n \"acc_stderr\": 0.025279850397404904,\n \"acc_norm\": 0.40476190476190477,\n \"acc_norm_stderr\": 0.025279850397404904\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4603174603174603,\n \"acc_stderr\": 0.04458029125470973,\n \"acc_norm\": 0.4603174603174603,\n \"acc_norm_stderr\": 0.04458029125470973\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.38,\n \"acc_stderr\": 0.04878317312145633,\n \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.04878317312145633\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7903225806451613,\n \"acc_stderr\": 0.023157879349083522,\n \"acc_norm\": 0.7903225806451613,\n \"acc_norm_stderr\": 0.023157879349083522\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5024630541871922,\n \"acc_stderr\": 0.035179450386910616,\n \"acc_norm\": 0.5024630541871922,\n \"acc_norm_stderr\": 0.035179450386910616\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7757575757575758,\n \"acc_stderr\": 0.03256866661681102,\n \"acc_norm\": 0.7757575757575758,\n \"acc_norm_stderr\": 0.03256866661681102\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.8282828282828283,\n \"acc_stderr\": 0.026869716187429917,\n \"acc_norm\": 0.8282828282828283,\n \"acc_norm_stderr\": 0.026869716187429917\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8963730569948186,\n \"acc_stderr\": 0.02199531196364424,\n \"acc_norm\": 0.8963730569948186,\n \"acc_norm_stderr\": 0.02199531196364424\n 
},\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6743589743589744,\n \"acc_stderr\": 0.02375966576741229,\n \"acc_norm\": 0.6743589743589744,\n \"acc_norm_stderr\": 0.02375966576741229\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.35555555555555557,\n \"acc_stderr\": 0.029185714949857413,\n \"acc_norm\": 0.35555555555555557,\n \"acc_norm_stderr\": 0.029185714949857413\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6722689075630253,\n \"acc_stderr\": 0.03048991141767323,\n \"acc_norm\": 0.6722689075630253,\n \"acc_norm_stderr\": 0.03048991141767323\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3708609271523179,\n \"acc_stderr\": 0.039439666991836285,\n \"acc_norm\": 0.3708609271523179,\n \"acc_norm_stderr\": 0.039439666991836285\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8422018348623853,\n \"acc_stderr\": 0.01563002297009244,\n \"acc_norm\": 0.8422018348623853,\n \"acc_norm_stderr\": 0.01563002297009244\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5277777777777778,\n \"acc_stderr\": 0.0340470532865388,\n \"acc_norm\": 0.5277777777777778,\n \"acc_norm_stderr\": 0.0340470532865388\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8284313725490197,\n \"acc_stderr\": 0.02646056956124064,\n \"acc_norm\": 0.8284313725490197,\n \"acc_norm_stderr\": 0.02646056956124064\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7974683544303798,\n \"acc_stderr\": 0.026160568246601446,\n \"acc_norm\": 0.7974683544303798,\n \"acc_norm_stderr\": 0.026160568246601446\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6860986547085202,\n \"acc_stderr\": 0.031146796482972465,\n \"acc_norm\": 0.6860986547085202,\n \"acc_norm_stderr\": 0.031146796482972465\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7862595419847328,\n \"acc_stderr\": 0.0359546161177469,\n \"acc_norm\": 0.7862595419847328,\n \"acc_norm_stderr\": 0.0359546161177469\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.768595041322314,\n \"acc_stderr\": 0.0384985609879409,\n \"acc_norm\": 0.768595041322314,\n \"acc_norm_stderr\": 0.0384985609879409\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7685185185185185,\n \"acc_stderr\": 0.04077494709252626,\n \"acc_norm\": 0.7685185185185185,\n \"acc_norm_stderr\": 0.04077494709252626\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7607361963190185,\n \"acc_stderr\": 0.0335195387952127,\n \"acc_norm\": 0.7607361963190185,\n \"acc_norm_stderr\": 0.0335195387952127\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.44642857142857145,\n \"acc_stderr\": 0.04718471485219588,\n \"acc_norm\": 0.44642857142857145,\n \"acc_norm_stderr\": 0.04718471485219588\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7766990291262136,\n \"acc_stderr\": 0.04123553189891431,\n \"acc_norm\": 0.7766990291262136,\n \"acc_norm_stderr\": 0.04123553189891431\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8846153846153846,\n \"acc_stderr\": 0.02093019318517933,\n \"acc_norm\": 0.8846153846153846,\n \"acc_norm_stderr\": 0.02093019318517933\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.73,\n \"acc_stderr\": 0.044619604333847394,\n \"acc_norm\": 0.73,\n \"acc_norm_stderr\": 0.044619604333847394\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 
0.8326947637292464,\n \"acc_stderr\": 0.013347327202920332,\n \"acc_norm\": 0.8326947637292464,\n \"acc_norm_stderr\": 0.013347327202920332\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7456647398843931,\n \"acc_stderr\": 0.023445826276545546,\n \"acc_norm\": 0.7456647398843931,\n \"acc_norm_stderr\": 0.023445826276545546\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.41899441340782123,\n \"acc_stderr\": 0.016501579306861677,\n \"acc_norm\": 0.41899441340782123,\n \"acc_norm_stderr\": 0.016501579306861677\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7124183006535948,\n \"acc_stderr\": 0.02591780611714716,\n \"acc_norm\": 0.7124183006535948,\n \"acc_norm_stderr\": 0.02591780611714716\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.707395498392283,\n \"acc_stderr\": 0.02583989833487798,\n \"acc_norm\": 0.707395498392283,\n \"acc_norm_stderr\": 0.02583989833487798\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.75,\n \"acc_stderr\": 0.02409347123262133,\n \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.02409347123262133\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4787234042553192,\n \"acc_stderr\": 0.029800481645628693,\n \"acc_norm\": 0.4787234042553192,\n \"acc_norm_stderr\": 0.029800481645628693\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4661016949152542,\n \"acc_stderr\": 0.01274085387294983,\n \"acc_norm\": 0.4661016949152542,\n \"acc_norm_stderr\": 0.01274085387294983\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6838235294117647,\n \"acc_stderr\": 0.02824568739146294,\n \"acc_norm\": 0.6838235294117647,\n \"acc_norm_stderr\": 0.02824568739146294\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6715686274509803,\n \"acc_stderr\": 0.018999707383162673,\n \"acc_norm\": 0.6715686274509803,\n \"acc_norm_stderr\": 0.018999707383162673\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6818181818181818,\n \"acc_stderr\": 0.044612721759105085,\n \"acc_norm\": 0.6818181818181818,\n \"acc_norm_stderr\": 0.044612721759105085\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7428571428571429,\n \"acc_stderr\": 0.02797982353874455,\n \"acc_norm\": 0.7428571428571429,\n \"acc_norm_stderr\": 0.02797982353874455\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8606965174129353,\n \"acc_stderr\": 0.024484487162913973,\n \"acc_norm\": 0.8606965174129353,\n \"acc_norm_stderr\": 0.024484487162913973\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.86,\n \"acc_stderr\": 0.0348735088019777,\n \"acc_norm\": 0.86,\n \"acc_norm_stderr\": 0.0348735088019777\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5421686746987951,\n \"acc_stderr\": 0.0387862677100236,\n \"acc_norm\": 0.5421686746987951,\n \"acc_norm_stderr\": 0.0387862677100236\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8421052631578947,\n \"acc_stderr\": 0.02796678585916089,\n \"acc_norm\": 0.8421052631578947,\n \"acc_norm_stderr\": 0.02796678585916089\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.48959608323133413,\n \"mc1_stderr\": 0.017499711430249264,\n \"mc2\": 0.6508442171488594,\n \"mc2_stderr\": 0.015062470027570826\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.813733228097869,\n \"acc_stderr\": 0.01094187795567621\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.7058377558756633,\n \"acc_stderr\": 0.012551285331470157\n }\n}\n```", "repo_url": 
"https://huggingface.co/Gille/StrangeMerges_9-7B-dare_ties", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|arc:challenge|25_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|gsm8k|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|hellaswag|10_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T16-46-42.889745.parquet", 
"**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-management|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-virology|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T16-46-42.889745.parquet", 
"**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-management|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-02-01T16-46-42.889745.parquet", 
"**/details_harness|hendrycksTest-philosophy|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-virology|5_2024-02-01T16-46-42.889745.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": 
["**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-global_facts|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T16-46-42.889745.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|hendrycksTest-management|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|hendrycksTest-virology|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|truthfulqa:mc|0_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_02_01T16_46_42.889745", "path": ["**/details_harness|winogrande|5_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-02-01T16-46-42.889745.parquet"]}]}, {"config_name": "results", "data_files": [{"split": 
"2024_02_01T16_46_42.889745", "path": ["results_2024-02-01T16-46-42.889745.parquet"]}, {"split": "latest", "path": ["results_2024-02-01T16-46-42.889745.parquet"]}]}]} | 2024-02-01T16:49:27+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of Gille/StrangeMerges_9-7B-dare_ties
Dataset automatically created during the evaluation run of model Gille/StrangeMerges_9-7B-dare_ties on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
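For example (a minimal sketch; the dataset repository id below is inferred from the details_<org>__<model> naming convention used by these evaluation runs, and the "harness_winogrande_5" configuration is taken from this card's metadata):
```python
from datasets import load_dataset

# Repository id inferred from the leaderboard's details_<org>__<model> naming
# convention; "harness_winogrande_5" is one of the 63 listed configurations.
data = load_dataset("open-llm-leaderboard/details_Gille__StrangeMerges_9-7B-dare_ties",
	"harness_winogrande_5",
	split="train")
```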
## Latest results
These are the latest results from run 2024-02-01T16:46:42.889745 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Evaluation run of Gille/StrangeMerges_9-7B-dare_ties\n\n\n\nDataset automatically created during the evaluation run of model Gille/StrangeMerges_9-7B-dare_ties on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-02-01T16:46:42.889745(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Gille/StrangeMerges_9-7B-dare_ties\n\n\n\nDataset automatically created during the evaluation run of model Gille/StrangeMerges_9-7B-dare_ties on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-02-01T16:46:42.889745(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] |
085124f152c8adceda16b9aba922dca68b2a10c1 |
# Dataset Card for Evaluation run of notadib/Mistral-7B-Instruct-v0.2-attention-sparsity-10-v0.1
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [notadib/Mistral-7B-Instruct-v0.2-attention-sparsity-10-v0.1](https://huggingface.co/notadib/Mistral-7B-Instruct-v0.2-attention-sparsity-10-v0.1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_notadib__Mistral-7B-Instruct-v0.2-attention-sparsity-10-v0.1",
"harness_winogrande_5",
split="train")
```
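The aggregated metrics can be loaded the same way through the "results" configuration; a sketch (the "latest" split name comes from the configuration list in this card's metadata):
```python
from datasets import load_dataset

# The "results" configuration stores the aggregated metrics of the run;
# the "latest" split always points to the most recent evaluation.
results = load_dataset("open-llm-leaderboard/details_notadib__Mistral-7B-Instruct-v0.2-attention-sparsity-10-v0.1",
	"results",
	split="latest")
```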
## Latest results
These are the [latest results from run 2024-02-01T16:47:43.870919](https://huggingface.co/datasets/open-llm-leaderboard/details_notadib__Mistral-7B-Instruct-v0.2-attention-sparsity-10-v0.1/blob/main/results_2024-02-01T16-47-43.870919.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.608213540240799,
"acc_stderr": 0.03315279862254355,
"acc_norm": 0.6128927690011974,
"acc_norm_stderr": 0.03382542868703408,
"mc1": 0.5275397796817626,
"mc1_stderr": 0.01747693019071219,
"mc2": 0.6811241660222933,
"mc2_stderr": 0.015196421629330473
},
"harness|arc:challenge|25": {
"acc": 0.591296928327645,
"acc_stderr": 0.014365750345426998,
"acc_norm": 0.6305460750853242,
"acc_norm_stderr": 0.014104578366491888
},
"harness|hellaswag|10": {
"acc": 0.6683927504481179,
"acc_stderr": 0.004698285350019217,
"acc_norm": 0.8488348934475204,
"acc_norm_stderr": 0.003574776594108505
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.33,
"acc_stderr": 0.047258156262526045,
"acc_norm": 0.33,
"acc_norm_stderr": 0.047258156262526045
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5777777777777777,
"acc_stderr": 0.04266763404099582,
"acc_norm": 0.5777777777777777,
"acc_norm_stderr": 0.04266763404099582
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6052631578947368,
"acc_stderr": 0.039777499346220734,
"acc_norm": 0.6052631578947368,
"acc_norm_stderr": 0.039777499346220734
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.6,
"acc_stderr": 0.04923659639173309,
"acc_norm": 0.6,
"acc_norm_stderr": 0.04923659639173309
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6679245283018868,
"acc_stderr": 0.02898545565233439,
"acc_norm": 0.6679245283018868,
"acc_norm_stderr": 0.02898545565233439
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.6875,
"acc_stderr": 0.038760854559127644,
"acc_norm": 0.6875,
"acc_norm_stderr": 0.038760854559127644
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.4,
"acc_stderr": 0.04923659639173309,
"acc_norm": 0.4,
"acc_norm_stderr": 0.04923659639173309
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.53,
"acc_stderr": 0.050161355804659205,
"acc_norm": 0.53,
"acc_norm_stderr": 0.050161355804659205
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.4,
"acc_stderr": 0.049236596391733084,
"acc_norm": 0.4,
"acc_norm_stderr": 0.049236596391733084
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.5895953757225434,
"acc_stderr": 0.03750757044895536,
"acc_norm": 0.5895953757225434,
"acc_norm_stderr": 0.03750757044895536
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4411764705882353,
"acc_stderr": 0.049406356306056595,
"acc_norm": 0.4411764705882353,
"acc_norm_stderr": 0.049406356306056595
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5361702127659574,
"acc_stderr": 0.032600385118357715,
"acc_norm": 0.5361702127659574,
"acc_norm_stderr": 0.032600385118357715
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.40350877192982454,
"acc_stderr": 0.04615186962583703,
"acc_norm": 0.40350877192982454,
"acc_norm_stderr": 0.04615186962583703
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.6275862068965518,
"acc_stderr": 0.04028731532947558,
"acc_norm": 0.6275862068965518,
"acc_norm_stderr": 0.04028731532947558
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.38095238095238093,
"acc_stderr": 0.025010749116137602,
"acc_norm": 0.38095238095238093,
"acc_norm_stderr": 0.025010749116137602
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.42063492063492064,
"acc_stderr": 0.04415438226743744,
"acc_norm": 0.42063492063492064,
"acc_norm_stderr": 0.04415438226743744
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.36,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.36,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.6225806451612903,
"acc_stderr": 0.027575960723278236,
"acc_norm": 0.6225806451612903,
"acc_norm_stderr": 0.027575960723278236
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5024630541871922,
"acc_stderr": 0.035179450386910616,
"acc_norm": 0.5024630541871922,
"acc_norm_stderr": 0.035179450386910616
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.65,
"acc_stderr": 0.047937248544110196,
"acc_norm": 0.65,
"acc_norm_stderr": 0.047937248544110196
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7393939393939394,
"acc_stderr": 0.034277431758165236,
"acc_norm": 0.7393939393939394,
"acc_norm_stderr": 0.034277431758165236
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7626262626262627,
"acc_stderr": 0.030313710538198896,
"acc_norm": 0.7626262626262627,
"acc_norm_stderr": 0.030313710538198896
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8549222797927462,
"acc_stderr": 0.025416343096306443,
"acc_norm": 0.8549222797927462,
"acc_norm_stderr": 0.025416343096306443
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.5615384615384615,
"acc_stderr": 0.02515826601686858,
"acc_norm": 0.5615384615384615,
"acc_norm_stderr": 0.02515826601686858
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3,
"acc_stderr": 0.027940457136228395,
"acc_norm": 0.3,
"acc_norm_stderr": 0.027940457136228395
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6638655462184874,
"acc_stderr": 0.030684737115135363,
"acc_norm": 0.6638655462184874,
"acc_norm_stderr": 0.030684737115135363
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3576158940397351,
"acc_stderr": 0.03913453431177258,
"acc_norm": 0.3576158940397351,
"acc_norm_stderr": 0.03913453431177258
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.7908256880733945,
"acc_stderr": 0.017437937173343233,
"acc_norm": 0.7908256880733945,
"acc_norm_stderr": 0.017437937173343233
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4537037037037037,
"acc_stderr": 0.03395322726375797,
"acc_norm": 0.4537037037037037,
"acc_norm_stderr": 0.03395322726375797
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7647058823529411,
"acc_stderr": 0.029771775228145624,
"acc_norm": 0.7647058823529411,
"acc_norm_stderr": 0.029771775228145624
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7552742616033755,
"acc_stderr": 0.027985699387036423,
"acc_norm": 0.7552742616033755,
"acc_norm_stderr": 0.027985699387036423
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6143497757847534,
"acc_stderr": 0.03266842214289201,
"acc_norm": 0.6143497757847534,
"acc_norm_stderr": 0.03266842214289201
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7404580152671756,
"acc_stderr": 0.03844876139785271,
"acc_norm": 0.7404580152671756,
"acc_norm_stderr": 0.03844876139785271
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8099173553719008,
"acc_stderr": 0.03581796951709282,
"acc_norm": 0.8099173553719008,
"acc_norm_stderr": 0.03581796951709282
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7314814814814815,
"acc_stderr": 0.042844679680521934,
"acc_norm": 0.7314814814814815,
"acc_norm_stderr": 0.042844679680521934
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7361963190184049,
"acc_stderr": 0.03462419931615624,
"acc_norm": 0.7361963190184049,
"acc_norm_stderr": 0.03462419931615624
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.44642857142857145,
"acc_stderr": 0.047184714852195886,
"acc_norm": 0.44642857142857145,
"acc_norm_stderr": 0.047184714852195886
},
"harness|hendrycksTest-management|5": {
"acc": 0.7281553398058253,
"acc_stderr": 0.044052680241409216,
"acc_norm": 0.7281553398058253,
"acc_norm_stderr": 0.044052680241409216
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8589743589743589,
"acc_stderr": 0.022801382534597552,
"acc_norm": 0.8589743589743589,
"acc_norm_stderr": 0.022801382534597552
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.68,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.68,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7790549169859514,
"acc_stderr": 0.014836205167333558,
"acc_norm": 0.7790549169859514,
"acc_norm_stderr": 0.014836205167333558
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6936416184971098,
"acc_stderr": 0.024818350129436593,
"acc_norm": 0.6936416184971098,
"acc_norm_stderr": 0.024818350129436593
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.3195530726256983,
"acc_stderr": 0.015595520294147411,
"acc_norm": 0.3195530726256983,
"acc_norm_stderr": 0.015595520294147411
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6797385620915033,
"acc_stderr": 0.02671611838015685,
"acc_norm": 0.6797385620915033,
"acc_norm_stderr": 0.02671611838015685
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7009646302250804,
"acc_stderr": 0.02600330111788514,
"acc_norm": 0.7009646302250804,
"acc_norm_stderr": 0.02600330111788514
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7098765432098766,
"acc_stderr": 0.025251173936495033,
"acc_norm": 0.7098765432098766,
"acc_norm_stderr": 0.025251173936495033
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.450354609929078,
"acc_stderr": 0.029680105565029036,
"acc_norm": 0.450354609929078,
"acc_norm_stderr": 0.029680105565029036
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4335071707953064,
"acc_stderr": 0.012656810383983964,
"acc_norm": 0.4335071707953064,
"acc_norm_stderr": 0.012656810383983964
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6102941176470589,
"acc_stderr": 0.0296246635811597,
"acc_norm": 0.6102941176470589,
"acc_norm_stderr": 0.0296246635811597
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6339869281045751,
"acc_stderr": 0.019488025745529675,
"acc_norm": 0.6339869281045751,
"acc_norm_stderr": 0.019488025745529675
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7181818181818181,
"acc_stderr": 0.043091187099464585,
"acc_norm": 0.7181818181818181,
"acc_norm_stderr": 0.043091187099464585
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7061224489795919,
"acc_stderr": 0.02916273841024977,
"acc_norm": 0.7061224489795919,
"acc_norm_stderr": 0.02916273841024977
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.7263681592039801,
"acc_stderr": 0.031524391865554016,
"acc_norm": 0.7263681592039801,
"acc_norm_stderr": 0.031524391865554016
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.81,
"acc_stderr": 0.03942772444036625,
"acc_norm": 0.81,
"acc_norm_stderr": 0.03942772444036625
},
"harness|hendrycksTest-virology|5": {
"acc": 0.4939759036144578,
"acc_stderr": 0.03892212195333047,
"acc_norm": 0.4939759036144578,
"acc_norm_stderr": 0.03892212195333047
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8421052631578947,
"acc_stderr": 0.027966785859160893,
"acc_norm": 0.8421052631578947,
"acc_norm_stderr": 0.027966785859160893
},
"harness|truthfulqa:mc|0": {
"mc1": 0.5275397796817626,
"mc1_stderr": 0.01747693019071219,
"mc2": 0.6811241660222933,
"mc2_stderr": 0.015196421629330473
},
"harness|winogrande|5": {
"acc": 0.771112865035517,
"acc_stderr": 0.01180736022402539
},
"harness|gsm8k|5": {
"acc": 0.3889310083396513,
"acc_stderr": 0.013428382481274249
}
}
```
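As a quick illustration of how these numbers can be consumed, the sketch below (assuming the JSON block above has been saved to a local file named results.json, a hypothetical path) recomputes the macro-average accuracy over the MMLU (hendrycksTest) subtasks:
```python
import json

# Hypothetical local copy of the results block shown above
# (the printed dict is valid JSON).
with open("results.json") as f:
    results = json.load(f)

# Every MMLU subtask is keyed "harness|hendrycksTest-<subject>|5".
mmlu = {k: v for k, v in results.items() if k.startswith("harness|hendrycksTest-")}
mmlu_avg = sum(v["acc"] for v in mmlu.values()) / len(mmlu)
print(f"MMLU macro-average over {len(mmlu)} subtasks: {mmlu_avg:.4f}")
```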
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_notadib__Mistral-7B-Instruct-v0.2-attention-sparsity-10-v0.1 | [
"region:us"
] | 2024-02-01T16:49:58+00:00 | {"pretty_name": "Evaluation run of notadib/Mistral-7B-Instruct-v0.2-attention-sparsity-10-v0.1", "dataset_summary": "Dataset automatically created during the evaluation run of model [notadib/Mistral-7B-Instruct-v0.2-attention-sparsity-10-v0.1](https://huggingface.co/notadib/Mistral-7B-Instruct-v0.2-attention-sparsity-10-v0.1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_notadib__Mistral-7B-Instruct-v0.2-attention-sparsity-10-v0.1\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-02-01T16:47:43.870919](https://huggingface.co/datasets/open-llm-leaderboard/details_notadib__Mistral-7B-Instruct-v0.2-attention-sparsity-10-v0.1/blob/main/results_2024-02-01T16-47-43.870919.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.608213540240799,\n \"acc_stderr\": 0.03315279862254355,\n \"acc_norm\": 0.6128927690011974,\n \"acc_norm_stderr\": 0.03382542868703408,\n \"mc1\": 0.5275397796817626,\n \"mc1_stderr\": 0.01747693019071219,\n \"mc2\": 0.6811241660222933,\n \"mc2_stderr\": 0.015196421629330473\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.591296928327645,\n \"acc_stderr\": 0.014365750345426998,\n \"acc_norm\": 0.6305460750853242,\n \"acc_norm_stderr\": 0.014104578366491888\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6683927504481179,\n \"acc_stderr\": 0.004698285350019217,\n \"acc_norm\": 0.8488348934475204,\n \"acc_norm_stderr\": 0.003574776594108505\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.047258156262526045,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.047258156262526045\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5777777777777777,\n \"acc_stderr\": 0.04266763404099582,\n \"acc_norm\": 0.5777777777777777,\n \"acc_norm_stderr\": 0.04266763404099582\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6052631578947368,\n \"acc_stderr\": 0.039777499346220734,\n \"acc_norm\": 0.6052631578947368,\n \"acc_norm_stderr\": 0.039777499346220734\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.6,\n \"acc_stderr\": 0.04923659639173309,\n \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.04923659639173309\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6679245283018868,\n \"acc_stderr\": 0.02898545565233439,\n \"acc_norm\": 0.6679245283018868,\n \"acc_norm_stderr\": 0.02898545565233439\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6875,\n \"acc_stderr\": 
0.038760854559127644,\n \"acc_norm\": 0.6875,\n \"acc_norm_stderr\": 0.038760854559127644\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.4,\n \"acc_stderr\": 0.04923659639173309,\n \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.04923659639173309\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.53,\n \"acc_stderr\": 0.050161355804659205,\n \"acc_norm\": 0.53,\n \"acc_norm_stderr\": 0.050161355804659205\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.4,\n \"acc_stderr\": 0.049236596391733084,\n \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.049236596391733084\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5895953757225434,\n \"acc_stderr\": 0.03750757044895536,\n \"acc_norm\": 0.5895953757225434,\n \"acc_norm_stderr\": 0.03750757044895536\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.4411764705882353,\n \"acc_stderr\": 0.049406356306056595,\n \"acc_norm\": 0.4411764705882353,\n \"acc_norm_stderr\": 0.049406356306056595\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5361702127659574,\n \"acc_stderr\": 0.032600385118357715,\n \"acc_norm\": 0.5361702127659574,\n \"acc_norm_stderr\": 0.032600385118357715\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.40350877192982454,\n \"acc_stderr\": 0.04615186962583703,\n \"acc_norm\": 0.40350877192982454,\n \"acc_norm_stderr\": 0.04615186962583703\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.6275862068965518,\n \"acc_stderr\": 0.04028731532947558,\n \"acc_norm\": 0.6275862068965518,\n \"acc_norm_stderr\": 0.04028731532947558\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.38095238095238093,\n \"acc_stderr\": 0.025010749116137602,\n \"acc_norm\": 0.38095238095238093,\n \"acc_norm_stderr\": 0.025010749116137602\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.42063492063492064,\n \"acc_stderr\": 0.04415438226743744,\n \"acc_norm\": 0.42063492063492064,\n \"acc_norm_stderr\": 0.04415438226743744\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.36,\n \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.6225806451612903,\n \"acc_stderr\": 0.027575960723278236,\n \"acc_norm\": 0.6225806451612903,\n \"acc_norm_stderr\": 0.027575960723278236\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5024630541871922,\n \"acc_stderr\": 0.035179450386910616,\n \"acc_norm\": 0.5024630541871922,\n \"acc_norm_stderr\": 0.035179450386910616\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.65,\n \"acc_stderr\": 0.047937248544110196,\n \"acc_norm\": 0.65,\n \"acc_norm_stderr\": 0.047937248544110196\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7393939393939394,\n \"acc_stderr\": 0.034277431758165236,\n \"acc_norm\": 0.7393939393939394,\n \"acc_norm_stderr\": 0.034277431758165236\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7626262626262627,\n \"acc_stderr\": 0.030313710538198896,\n \"acc_norm\": 0.7626262626262627,\n \"acc_norm_stderr\": 0.030313710538198896\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 
0.8549222797927462,\n \"acc_stderr\": 0.025416343096306443,\n \"acc_norm\": 0.8549222797927462,\n \"acc_norm_stderr\": 0.025416343096306443\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.5615384615384615,\n \"acc_stderr\": 0.02515826601686858,\n \"acc_norm\": 0.5615384615384615,\n \"acc_norm_stderr\": 0.02515826601686858\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.027940457136228395,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.027940457136228395\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6638655462184874,\n \"acc_stderr\": 0.030684737115135363,\n \"acc_norm\": 0.6638655462184874,\n \"acc_norm_stderr\": 0.030684737115135363\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3576158940397351,\n \"acc_stderr\": 0.03913453431177258,\n \"acc_norm\": 0.3576158940397351,\n \"acc_norm_stderr\": 0.03913453431177258\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.7908256880733945,\n \"acc_stderr\": 0.017437937173343233,\n \"acc_norm\": 0.7908256880733945,\n \"acc_norm_stderr\": 0.017437937173343233\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.4537037037037037,\n \"acc_stderr\": 0.03395322726375797,\n \"acc_norm\": 0.4537037037037037,\n \"acc_norm_stderr\": 0.03395322726375797\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7647058823529411,\n \"acc_stderr\": 0.029771775228145624,\n \"acc_norm\": 0.7647058823529411,\n \"acc_norm_stderr\": 0.029771775228145624\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7552742616033755,\n \"acc_stderr\": 0.027985699387036423,\n \"acc_norm\": 0.7552742616033755,\n \"acc_norm_stderr\": 0.027985699387036423\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6143497757847534,\n \"acc_stderr\": 0.03266842214289201,\n \"acc_norm\": 0.6143497757847534,\n \"acc_norm_stderr\": 0.03266842214289201\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7404580152671756,\n \"acc_stderr\": 0.03844876139785271,\n \"acc_norm\": 0.7404580152671756,\n \"acc_norm_stderr\": 0.03844876139785271\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8099173553719008,\n \"acc_stderr\": 0.03581796951709282,\n \"acc_norm\": 0.8099173553719008,\n \"acc_norm_stderr\": 0.03581796951709282\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7314814814814815,\n \"acc_stderr\": 0.042844679680521934,\n \"acc_norm\": 0.7314814814814815,\n \"acc_norm_stderr\": 0.042844679680521934\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7361963190184049,\n \"acc_stderr\": 0.03462419931615624,\n \"acc_norm\": 0.7361963190184049,\n \"acc_norm_stderr\": 0.03462419931615624\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.44642857142857145,\n \"acc_stderr\": 0.047184714852195886,\n \"acc_norm\": 0.44642857142857145,\n \"acc_norm_stderr\": 0.047184714852195886\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7281553398058253,\n \"acc_stderr\": 0.044052680241409216,\n \"acc_norm\": 0.7281553398058253,\n \"acc_norm_stderr\": 0.044052680241409216\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8589743589743589,\n \"acc_stderr\": 0.022801382534597552,\n \"acc_norm\": 0.8589743589743589,\n \"acc_norm_stderr\": 0.022801382534597552\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.68,\n \"acc_stderr\": 0.046882617226215034,\n 
\"acc_norm\": 0.68,\n \"acc_norm_stderr\": 0.046882617226215034\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7790549169859514,\n \"acc_stderr\": 0.014836205167333558,\n \"acc_norm\": 0.7790549169859514,\n \"acc_norm_stderr\": 0.014836205167333558\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.6936416184971098,\n \"acc_stderr\": 0.024818350129436593,\n \"acc_norm\": 0.6936416184971098,\n \"acc_norm_stderr\": 0.024818350129436593\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3195530726256983,\n \"acc_stderr\": 0.015595520294147411,\n \"acc_norm\": 0.3195530726256983,\n \"acc_norm_stderr\": 0.015595520294147411\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.6797385620915033,\n \"acc_stderr\": 0.02671611838015685,\n \"acc_norm\": 0.6797385620915033,\n \"acc_norm_stderr\": 0.02671611838015685\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7009646302250804,\n \"acc_stderr\": 0.02600330111788514,\n \"acc_norm\": 0.7009646302250804,\n \"acc_norm_stderr\": 0.02600330111788514\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7098765432098766,\n \"acc_stderr\": 0.025251173936495033,\n \"acc_norm\": 0.7098765432098766,\n \"acc_norm_stderr\": 0.025251173936495033\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.450354609929078,\n \"acc_stderr\": 0.029680105565029036,\n \"acc_norm\": 0.450354609929078,\n \"acc_norm_stderr\": 0.029680105565029036\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4335071707953064,\n \"acc_stderr\": 0.012656810383983964,\n \"acc_norm\": 0.4335071707953064,\n \"acc_norm_stderr\": 0.012656810383983964\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6102941176470589,\n \"acc_stderr\": 0.0296246635811597,\n \"acc_norm\": 0.6102941176470589,\n \"acc_norm_stderr\": 0.0296246635811597\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6339869281045751,\n \"acc_stderr\": 0.019488025745529675,\n \"acc_norm\": 0.6339869281045751,\n \"acc_norm_stderr\": 0.019488025745529675\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7181818181818181,\n \"acc_stderr\": 0.043091187099464585,\n \"acc_norm\": 0.7181818181818181,\n \"acc_norm_stderr\": 0.043091187099464585\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7061224489795919,\n \"acc_stderr\": 0.02916273841024977,\n \"acc_norm\": 0.7061224489795919,\n \"acc_norm_stderr\": 0.02916273841024977\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7263681592039801,\n \"acc_stderr\": 0.031524391865554016,\n \"acc_norm\": 0.7263681592039801,\n \"acc_norm_stderr\": 0.031524391865554016\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.81,\n \"acc_stderr\": 0.03942772444036625,\n \"acc_norm\": 0.81,\n \"acc_norm_stderr\": 0.03942772444036625\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.4939759036144578,\n \"acc_stderr\": 0.03892212195333047,\n \"acc_norm\": 0.4939759036144578,\n \"acc_norm_stderr\": 0.03892212195333047\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8421052631578947,\n \"acc_stderr\": 0.027966785859160893,\n \"acc_norm\": 0.8421052631578947,\n \"acc_norm_stderr\": 0.027966785859160893\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5275397796817626,\n \"mc1_stderr\": 0.01747693019071219,\n \"mc2\": 0.6811241660222933,\n \"mc2_stderr\": 0.015196421629330473\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.771112865035517,\n 
\"acc_stderr\": 0.01180736022402539\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.3889310083396513,\n \"acc_stderr\": 0.013428382481274249\n }\n}\n```", "repo_url": "https://huggingface.co/notadib/Mistral-7B-Instruct-v0.2-attention-sparsity-10-v0.1", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|arc:challenge|25_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|gsm8k|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hellaswag|10_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T16-47-43.870919.parquet", 
"**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-management|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-virology|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T16-47-43.870919.parquet", 
"**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-management|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T16-47-43.870919.parquet", 
"**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-virology|5_2024-02-01T16-47-43.870919.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": 
"latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T16-47-43.870919.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hendrycksTest-management|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": 
["**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hendrycksTest-virology|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|truthfulqa:mc|0_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": 
["**/details_harness|truthfulqa:mc|0_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["**/details_harness|winogrande|5_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-02-01T16-47-43.870919.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_02_01T16_47_43.870919", "path": ["results_2024-02-01T16-47-43.870919.parquet"]}, {"split": "latest", "path": ["results_2024-02-01T16-47-43.870919.parquet"]}]}]} | 2024-02-01T16:50:24+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of notadib/Mistral-7B-Instruct-v0.2-attention-sparsity-10-v0.1
Dataset automatically created during the evaluation run of model notadib/Mistral-7B-Instruct-v0.2-attention-sparsity-10-v0.1 on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
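A minimal sketch, mirroring the loader snippet given in this repository's metadata (repository name and configuration taken from there):

```python
from datasets import load_dataset

# Each evaluated task has its own configuration; the "train" split
# always points to the latest results for that task.
data = load_dataset(
    "open-llm-leaderboard/details_notadib__Mistral-7B-Instruct-v0.2-attention-sparsity-10-v0.1",
    "harness_winogrande_5",
    split="train",
)
```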
## Latest results
These are the latest results from run 2024-02-01T16:47:43.870919 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Evaluation run of notadib/Mistral-7B-Instruct-v0.2-attention-sparsity-10-v0.1\n\n\n\nDataset automatically created during the evaluation run of model notadib/Mistral-7B-Instruct-v0.2-attention-sparsity-10-v0.1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-02-01T16:47:43.870919(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of notadib/Mistral-7B-Instruct-v0.2-attention-sparsity-10-v0.1\n\n\n\nDataset automatically created during the evaluation run of model notadib/Mistral-7B-Instruct-v0.2-attention-sparsity-10-v0.1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-02-01T16:47:43.870919(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] |
9bfe04283fe10129937583b288ce37978b407751 |
# Dataset Card for Evaluation run of Wanfq/FuseLLM-7B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [Wanfq/FuseLLM-7B](https://huggingface.co/Wanfq/FuseLLM-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Wanfq__FuseLLM-7B",
"harness_winogrande_5",
split="train")
```
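Beyond the per-task configurations, the aggregated metrics can be loaded from the separate "results" configuration; a minimal sketch, assuming it exposes a "latest" split as in the other detail repositories of this leaderboard:

```python
from datasets import load_dataset

# The "results" configuration stores the aggregated metrics of the run;
# the "latest" split points to the most recent evaluation.
results = load_dataset(
    "open-llm-leaderboard/details_Wanfq__FuseLLM-7B",
    "results",
    split="latest",
)
```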
## Latest results
These are the [latest results from run 2024-02-01T16:48:44.963342](https://huggingface.co/datasets/open-llm-leaderboard/details_Wanfq__FuseLLM-7B/blob/main/results_2024-02-01T16-48-44.963342.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.4801705727969372,
"acc_stderr": 0.034512318476802376,
"acc_norm": 0.48541393567248253,
"acc_norm_stderr": 0.035295245059963676,
"mc1": 0.2460220318237454,
"mc1_stderr": 0.01507721920066259,
"mc2": 0.38170797049642685,
"mc2_stderr": 0.013464482874681617
},
"harness|arc:challenge|25": {
"acc": 0.4991467576791809,
"acc_stderr": 0.014611369529813272,
"acc_norm": 0.5324232081911263,
"acc_norm_stderr": 0.01458063756999542
},
"harness|hellaswag|10": {
"acc": 0.5878311093407688,
"acc_stderr": 0.004912192800263312,
"acc_norm": 0.7871937860983867,
"acc_norm_stderr": 0.00408455264190366
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.35,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.35,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.4666666666666667,
"acc_stderr": 0.043097329010363554,
"acc_norm": 0.4666666666666667,
"acc_norm_stderr": 0.043097329010363554
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.47368421052631576,
"acc_stderr": 0.04063302731486671,
"acc_norm": 0.47368421052631576,
"acc_norm_stderr": 0.04063302731486671
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.52,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.52,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.44528301886792454,
"acc_stderr": 0.030588052974270655,
"acc_norm": 0.44528301886792454,
"acc_norm_stderr": 0.030588052974270655
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.4652777777777778,
"acc_stderr": 0.04171115858181618,
"acc_norm": 0.4652777777777778,
"acc_norm_stderr": 0.04171115858181618
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.35,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.35,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.4,
"acc_stderr": 0.049236596391733084,
"acc_norm": 0.4,
"acc_norm_stderr": 0.049236596391733084
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.32,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.32,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.4277456647398844,
"acc_stderr": 0.037724468575180255,
"acc_norm": 0.4277456647398844,
"acc_norm_stderr": 0.037724468575180255
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.19607843137254902,
"acc_stderr": 0.03950581861179963,
"acc_norm": 0.19607843137254902,
"acc_norm_stderr": 0.03950581861179963
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.61,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.61,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.425531914893617,
"acc_stderr": 0.03232146916224468,
"acc_norm": 0.425531914893617,
"acc_norm_stderr": 0.03232146916224468
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.38596491228070173,
"acc_stderr": 0.04579639422070434,
"acc_norm": 0.38596491228070173,
"acc_norm_stderr": 0.04579639422070434
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5241379310344828,
"acc_stderr": 0.0416180850350153,
"acc_norm": 0.5241379310344828,
"acc_norm_stderr": 0.0416180850350153
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.2751322751322751,
"acc_stderr": 0.023000086859068642,
"acc_norm": 0.2751322751322751,
"acc_norm_stderr": 0.023000086859068642
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.29365079365079366,
"acc_stderr": 0.040735243221471255,
"acc_norm": 0.29365079365079366,
"acc_norm_stderr": 0.040735243221471255
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252605,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252605
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.5225806451612903,
"acc_stderr": 0.02841498501970786,
"acc_norm": 0.5225806451612903,
"acc_norm_stderr": 0.02841498501970786
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.3645320197044335,
"acc_stderr": 0.033864057460620905,
"acc_norm": 0.3645320197044335,
"acc_norm_stderr": 0.033864057460620905
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.45,
"acc_stderr": 0.049999999999999996,
"acc_norm": 0.45,
"acc_norm_stderr": 0.049999999999999996
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.593939393939394,
"acc_stderr": 0.03834816355401181,
"acc_norm": 0.593939393939394,
"acc_norm_stderr": 0.03834816355401181
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.5404040404040404,
"acc_stderr": 0.035507024651313425,
"acc_norm": 0.5404040404040404,
"acc_norm_stderr": 0.035507024651313425
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.7046632124352331,
"acc_stderr": 0.03292296639155141,
"acc_norm": 0.7046632124352331,
"acc_norm_stderr": 0.03292296639155141
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.44358974358974357,
"acc_stderr": 0.025189149894764198,
"acc_norm": 0.44358974358974357,
"acc_norm_stderr": 0.025189149894764198
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.2814814814814815,
"acc_stderr": 0.02742001935094528,
"acc_norm": 0.2814814814814815,
"acc_norm_stderr": 0.02742001935094528
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.41596638655462187,
"acc_stderr": 0.03201650100739615,
"acc_norm": 0.41596638655462187,
"acc_norm_stderr": 0.03201650100739615
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.304635761589404,
"acc_stderr": 0.03757949922943342,
"acc_norm": 0.304635761589404,
"acc_norm_stderr": 0.03757949922943342
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.653211009174312,
"acc_stderr": 0.020406097104093027,
"acc_norm": 0.653211009174312,
"acc_norm_stderr": 0.020406097104093027
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.3055555555555556,
"acc_stderr": 0.03141554629402544,
"acc_norm": 0.3055555555555556,
"acc_norm_stderr": 0.03141554629402544
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.5784313725490197,
"acc_stderr": 0.03465868196380762,
"acc_norm": 0.5784313725490197,
"acc_norm_stderr": 0.03465868196380762
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.6624472573839663,
"acc_stderr": 0.03078154910202622,
"acc_norm": 0.6624472573839663,
"acc_norm_stderr": 0.03078154910202622
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.5515695067264574,
"acc_stderr": 0.03337883736255098,
"acc_norm": 0.5515695067264574,
"acc_norm_stderr": 0.03337883736255098
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.5648854961832062,
"acc_stderr": 0.043482080516448585,
"acc_norm": 0.5648854961832062,
"acc_norm_stderr": 0.043482080516448585
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.6776859504132231,
"acc_stderr": 0.04266416363352168,
"acc_norm": 0.6776859504132231,
"acc_norm_stderr": 0.04266416363352168
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.5462962962962963,
"acc_stderr": 0.04812917324536824,
"acc_norm": 0.5462962962962963,
"acc_norm_stderr": 0.04812917324536824
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.50920245398773,
"acc_stderr": 0.03927705600787443,
"acc_norm": 0.50920245398773,
"acc_norm_stderr": 0.03927705600787443
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.39285714285714285,
"acc_stderr": 0.046355501356099754,
"acc_norm": 0.39285714285714285,
"acc_norm_stderr": 0.046355501356099754
},
"harness|hendrycksTest-management|5": {
"acc": 0.5631067961165048,
"acc_stderr": 0.04911147107365777,
"acc_norm": 0.5631067961165048,
"acc_norm_stderr": 0.04911147107365777
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.688034188034188,
"acc_stderr": 0.03035152732334493,
"acc_norm": 0.688034188034188,
"acc_norm_stderr": 0.03035152732334493
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.55,
"acc_stderr": 0.04999999999999999,
"acc_norm": 0.55,
"acc_norm_stderr": 0.04999999999999999
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.6679438058748404,
"acc_stderr": 0.01684117465529572,
"acc_norm": 0.6679438058748404,
"acc_norm_stderr": 0.01684117465529572
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.4913294797687861,
"acc_stderr": 0.026915047355369794,
"acc_norm": 0.4913294797687861,
"acc_norm_stderr": 0.026915047355369794
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.2424581005586592,
"acc_stderr": 0.014333522059217892,
"acc_norm": 0.2424581005586592,
"acc_norm_stderr": 0.014333522059217892
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.5098039215686274,
"acc_stderr": 0.028624412550167958,
"acc_norm": 0.5098039215686274,
"acc_norm_stderr": 0.028624412550167958
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.594855305466238,
"acc_stderr": 0.027882383791325956,
"acc_norm": 0.594855305466238,
"acc_norm_stderr": 0.027882383791325956
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.5370370370370371,
"acc_stderr": 0.027744313443376536,
"acc_norm": 0.5370370370370371,
"acc_norm_stderr": 0.027744313443376536
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.3829787234042553,
"acc_stderr": 0.02899908090480618,
"acc_norm": 0.3829787234042553,
"acc_norm_stderr": 0.02899908090480618
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.34159061277705344,
"acc_stderr": 0.012112391320842849,
"acc_norm": 0.34159061277705344,
"acc_norm_stderr": 0.012112391320842849
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.5147058823529411,
"acc_stderr": 0.03035969707904612,
"acc_norm": 0.5147058823529411,
"acc_norm_stderr": 0.03035969707904612
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.4526143790849673,
"acc_stderr": 0.020136790918492537,
"acc_norm": 0.4526143790849673,
"acc_norm_stderr": 0.020136790918492537
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.5272727272727272,
"acc_stderr": 0.04782001791380061,
"acc_norm": 0.5272727272727272,
"acc_norm_stderr": 0.04782001791380061
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.5265306122448979,
"acc_stderr": 0.03196412734523272,
"acc_norm": 0.5265306122448979,
"acc_norm_stderr": 0.03196412734523272
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.6417910447761194,
"acc_stderr": 0.03390393042268814,
"acc_norm": 0.6417910447761194,
"acc_norm_stderr": 0.03390393042268814
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.66,
"acc_stderr": 0.04760952285695237,
"acc_norm": 0.66,
"acc_norm_stderr": 0.04760952285695237
},
"harness|hendrycksTest-virology|5": {
"acc": 0.3795180722891566,
"acc_stderr": 0.03777798822748017,
"acc_norm": 0.3795180722891566,
"acc_norm_stderr": 0.03777798822748017
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.7251461988304093,
"acc_stderr": 0.03424042924691584,
"acc_norm": 0.7251461988304093,
"acc_norm_stderr": 0.03424042924691584
},
"harness|truthfulqa:mc|0": {
"mc1": 0.2460220318237454,
"mc1_stderr": 0.01507721920066259,
"mc2": 0.38170797049642685,
"mc2_stderr": 0.013464482874681617
},
"harness|winogrande|5": {
"acc": 0.7403314917127072,
"acc_stderr": 0.012322700705552666
},
"harness|gsm8k|5": {
"acc": 0.14329037149355572,
"acc_stderr": 0.00965089572335757
}
}
```
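For quick inspection, here is a minimal sketch of pulling an aggregate out of the JSON above (assuming it has been saved locally as `results.json`, a hypothetical filename):

```python
import json

# Minimal sketch: average the 5-shot MMLU ("hendrycksTest") acc_norm
# scores from the results JSON above. "results.json" is a hypothetical
# local copy of that JSON; point it at wherever you saved the file.
with open("results.json") as f:
    results = json.load(f)

mmlu_scores = [
    entry["acc_norm"]
    for task, entry in results.items()
    if task.startswith("harness|hendrycksTest-")
]
print(f"{len(mmlu_scores)} MMLU subjects, mean acc_norm: "
      f"{sum(mmlu_scores) / len(mmlu_scores):.4f}")
```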
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed]
"region:us"
] | 2024-02-01T16:50:33+00:00 | {"pretty_name": "Evaluation run of Wanfq/FuseLLM-7B", "dataset_summary": "Dataset automatically created during the evaluation run of model [Wanfq/FuseLLM-7B](https://huggingface.co/Wanfq/FuseLLM-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Wanfq__FuseLLM-7B\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-02-01T16:48:44.963342](https://huggingface.co/datasets/open-llm-leaderboard/details_Wanfq__FuseLLM-7B/blob/main/results_2024-02-01T16-48-44.963342.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.4801705727969372,\n \"acc_stderr\": 0.034512318476802376,\n \"acc_norm\": 0.48541393567248253,\n \"acc_norm_stderr\": 0.035295245059963676,\n \"mc1\": 0.2460220318237454,\n \"mc1_stderr\": 0.01507721920066259,\n \"mc2\": 0.38170797049642685,\n \"mc2_stderr\": 0.013464482874681617\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.4991467576791809,\n \"acc_stderr\": 0.014611369529813272,\n \"acc_norm\": 0.5324232081911263,\n \"acc_norm_stderr\": 0.01458063756999542\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5878311093407688,\n \"acc_stderr\": 0.004912192800263312,\n \"acc_norm\": 0.7871937860983867,\n \"acc_norm_stderr\": 0.00408455264190366\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.0479372485441102,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.4666666666666667,\n \"acc_stderr\": 0.043097329010363554,\n \"acc_norm\": 0.4666666666666667,\n \"acc_norm_stderr\": 0.043097329010363554\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.47368421052631576,\n \"acc_stderr\": 0.04063302731486671,\n \"acc_norm\": 0.47368421052631576,\n \"acc_norm_stderr\": 0.04063302731486671\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.52,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.52,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.44528301886792454,\n \"acc_stderr\": 0.030588052974270655,\n \"acc_norm\": 0.44528301886792454,\n \"acc_norm_stderr\": 0.030588052974270655\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.4652777777777778,\n \"acc_stderr\": 0.04171115858181618,\n \"acc_norm\": 0.4652777777777778,\n \"acc_norm_stderr\": 0.04171115858181618\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 
0.0479372485441102,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.4,\n \"acc_stderr\": 0.049236596391733084,\n \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.049236596391733084\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.4277456647398844,\n \"acc_stderr\": 0.037724468575180255,\n \"acc_norm\": 0.4277456647398844,\n \"acc_norm_stderr\": 0.037724468575180255\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.19607843137254902,\n \"acc_stderr\": 0.03950581861179963,\n \"acc_norm\": 0.19607843137254902,\n \"acc_norm_stderr\": 0.03950581861179963\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.61,\n \"acc_stderr\": 0.04902071300001975,\n \"acc_norm\": 0.61,\n \"acc_norm_stderr\": 0.04902071300001975\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.425531914893617,\n \"acc_stderr\": 0.03232146916224468,\n \"acc_norm\": 0.425531914893617,\n \"acc_norm_stderr\": 0.03232146916224468\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.38596491228070173,\n \"acc_stderr\": 0.04579639422070434,\n \"acc_norm\": 0.38596491228070173,\n \"acc_norm_stderr\": 0.04579639422070434\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5241379310344828,\n \"acc_stderr\": 0.0416180850350153,\n \"acc_norm\": 0.5241379310344828,\n \"acc_norm_stderr\": 0.0416180850350153\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.2751322751322751,\n \"acc_stderr\": 0.023000086859068642,\n \"acc_norm\": 0.2751322751322751,\n \"acc_norm_stderr\": 0.023000086859068642\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.29365079365079366,\n \"acc_stderr\": 0.040735243221471255,\n \"acc_norm\": 0.29365079365079366,\n \"acc_norm_stderr\": 0.040735243221471255\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252605,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252605\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.5225806451612903,\n \"acc_stderr\": 0.02841498501970786,\n \"acc_norm\": 0.5225806451612903,\n \"acc_norm_stderr\": 0.02841498501970786\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.3645320197044335,\n \"acc_stderr\": 0.033864057460620905,\n \"acc_norm\": 0.3645320197044335,\n \"acc_norm_stderr\": 0.033864057460620905\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.45,\n \"acc_stderr\": 0.049999999999999996,\n \"acc_norm\": 0.45,\n \"acc_norm_stderr\": 0.049999999999999996\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.593939393939394,\n \"acc_stderr\": 0.03834816355401181,\n \"acc_norm\": 0.593939393939394,\n \"acc_norm_stderr\": 0.03834816355401181\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.5404040404040404,\n \"acc_stderr\": 0.035507024651313425,\n \"acc_norm\": 0.5404040404040404,\n \"acc_norm_stderr\": 0.035507024651313425\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.7046632124352331,\n \"acc_stderr\": 0.03292296639155141,\n \"acc_norm\": 0.7046632124352331,\n \"acc_norm_stderr\": 0.03292296639155141\n },\n 
\"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.44358974358974357,\n \"acc_stderr\": 0.025189149894764198,\n \"acc_norm\": 0.44358974358974357,\n \"acc_norm_stderr\": 0.025189149894764198\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.2814814814814815,\n \"acc_stderr\": 0.02742001935094528,\n \"acc_norm\": 0.2814814814814815,\n \"acc_norm_stderr\": 0.02742001935094528\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.41596638655462187,\n \"acc_stderr\": 0.03201650100739615,\n \"acc_norm\": 0.41596638655462187,\n \"acc_norm_stderr\": 0.03201650100739615\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.304635761589404,\n \"acc_stderr\": 0.03757949922943342,\n \"acc_norm\": 0.304635761589404,\n \"acc_norm_stderr\": 0.03757949922943342\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.653211009174312,\n \"acc_stderr\": 0.020406097104093027,\n \"acc_norm\": 0.653211009174312,\n \"acc_norm_stderr\": 0.020406097104093027\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.3055555555555556,\n \"acc_stderr\": 0.03141554629402544,\n \"acc_norm\": 0.3055555555555556,\n \"acc_norm_stderr\": 0.03141554629402544\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.5784313725490197,\n \"acc_stderr\": 0.03465868196380762,\n \"acc_norm\": 0.5784313725490197,\n \"acc_norm_stderr\": 0.03465868196380762\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.6624472573839663,\n \"acc_stderr\": 0.03078154910202622,\n \"acc_norm\": 0.6624472573839663,\n \"acc_norm_stderr\": 0.03078154910202622\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.5515695067264574,\n \"acc_stderr\": 0.03337883736255098,\n \"acc_norm\": 0.5515695067264574,\n \"acc_norm_stderr\": 0.03337883736255098\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.5648854961832062,\n \"acc_stderr\": 0.043482080516448585,\n \"acc_norm\": 0.5648854961832062,\n \"acc_norm_stderr\": 0.043482080516448585\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.6776859504132231,\n \"acc_stderr\": 0.04266416363352168,\n \"acc_norm\": 0.6776859504132231,\n \"acc_norm_stderr\": 0.04266416363352168\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.5462962962962963,\n \"acc_stderr\": 0.04812917324536824,\n \"acc_norm\": 0.5462962962962963,\n \"acc_norm_stderr\": 0.04812917324536824\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.50920245398773,\n \"acc_stderr\": 0.03927705600787443,\n \"acc_norm\": 0.50920245398773,\n \"acc_norm_stderr\": 0.03927705600787443\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.39285714285714285,\n \"acc_stderr\": 0.046355501356099754,\n \"acc_norm\": 0.39285714285714285,\n \"acc_norm_stderr\": 0.046355501356099754\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.5631067961165048,\n \"acc_stderr\": 0.04911147107365777,\n \"acc_norm\": 0.5631067961165048,\n \"acc_norm_stderr\": 0.04911147107365777\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.688034188034188,\n \"acc_stderr\": 0.03035152732334493,\n \"acc_norm\": 0.688034188034188,\n \"acc_norm_stderr\": 0.03035152732334493\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.55,\n \"acc_stderr\": 0.04999999999999999,\n \"acc_norm\": 0.55,\n \"acc_norm_stderr\": 0.04999999999999999\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 
0.6679438058748404,\n \"acc_stderr\": 0.01684117465529572,\n \"acc_norm\": 0.6679438058748404,\n \"acc_norm_stderr\": 0.01684117465529572\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.4913294797687861,\n \"acc_stderr\": 0.026915047355369794,\n \"acc_norm\": 0.4913294797687861,\n \"acc_norm_stderr\": 0.026915047355369794\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2424581005586592,\n \"acc_stderr\": 0.014333522059217892,\n \"acc_norm\": 0.2424581005586592,\n \"acc_norm_stderr\": 0.014333522059217892\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.5098039215686274,\n \"acc_stderr\": 0.028624412550167958,\n \"acc_norm\": 0.5098039215686274,\n \"acc_norm_stderr\": 0.028624412550167958\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.594855305466238,\n \"acc_stderr\": 0.027882383791325956,\n \"acc_norm\": 0.594855305466238,\n \"acc_norm_stderr\": 0.027882383791325956\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.5370370370370371,\n \"acc_stderr\": 0.027744313443376536,\n \"acc_norm\": 0.5370370370370371,\n \"acc_norm_stderr\": 0.027744313443376536\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.3829787234042553,\n \"acc_stderr\": 0.02899908090480618,\n \"acc_norm\": 0.3829787234042553,\n \"acc_norm_stderr\": 0.02899908090480618\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.34159061277705344,\n \"acc_stderr\": 0.012112391320842849,\n \"acc_norm\": 0.34159061277705344,\n \"acc_norm_stderr\": 0.012112391320842849\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.5147058823529411,\n \"acc_stderr\": 0.03035969707904612,\n \"acc_norm\": 0.5147058823529411,\n \"acc_norm_stderr\": 0.03035969707904612\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.4526143790849673,\n \"acc_stderr\": 0.020136790918492537,\n \"acc_norm\": 0.4526143790849673,\n \"acc_norm_stderr\": 0.020136790918492537\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.5272727272727272,\n \"acc_stderr\": 0.04782001791380061,\n \"acc_norm\": 0.5272727272727272,\n \"acc_norm_stderr\": 0.04782001791380061\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.5265306122448979,\n \"acc_stderr\": 0.03196412734523272,\n \"acc_norm\": 0.5265306122448979,\n \"acc_norm_stderr\": 0.03196412734523272\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.6417910447761194,\n \"acc_stderr\": 0.03390393042268814,\n \"acc_norm\": 0.6417910447761194,\n \"acc_norm_stderr\": 0.03390393042268814\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.66,\n \"acc_stderr\": 0.04760952285695237,\n \"acc_norm\": 0.66,\n \"acc_norm_stderr\": 0.04760952285695237\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.3795180722891566,\n \"acc_stderr\": 0.03777798822748017,\n \"acc_norm\": 0.3795180722891566,\n \"acc_norm_stderr\": 0.03777798822748017\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.7251461988304093,\n \"acc_stderr\": 0.03424042924691584,\n \"acc_norm\": 0.7251461988304093,\n \"acc_norm_stderr\": 0.03424042924691584\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2460220318237454,\n \"mc1_stderr\": 0.01507721920066259,\n \"mc2\": 0.38170797049642685,\n \"mc2_stderr\": 0.013464482874681617\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7403314917127072,\n \"acc_stderr\": 0.012322700705552666\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.14329037149355572,\n \"acc_stderr\": 0.00965089572335757\n 
}\n}\n```", "repo_url": "https://huggingface.co/Wanfq/FuseLLM-7B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|arc:challenge|25_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|gsm8k|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|hellaswag|10_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T16-48-44.963342.parquet", 
"**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-management|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-virology|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T16-48-44.963342.parquet", 
"**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-management|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-02-01T16-48-44.963342.parquet", 
"**/details_harness|hendrycksTest-philosophy|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-virology|5_2024-02-01T16-48-44.963342.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": 
["**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-global_facts|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T16-48-44.963342.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|hendrycksTest-management|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|hendrycksTest-virology|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|truthfulqa:mc|0_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_02_01T16_48_44.963342", "path": ["**/details_harness|winogrande|5_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-02-01T16-48-44.963342.parquet"]}]}, {"config_name": "results", "data_files": [{"split": 
"2024_02_01T16_48_44.963342", "path": ["results_2024-02-01T16-48-44.963342.parquet"]}, {"split": "latest", "path": ["results_2024-02-01T16-48-44.963342.parquet"]}]}]} | 2024-02-01T16:50:57+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of Wanfq/FuseLLM-7B
Dataset automatically created during the evaluation run of model Wanfq/FuseLLM-7B on the Open LLM Leaderboard.
The dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2024-02-01T16:48:44.963342(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval):
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Evaluation run of Wanfq/FuseLLM-7B\n\n\n\nDataset automatically created during the evaluation run of model Wanfq/FuseLLM-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-02-01T16:48:44.963342(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Wanfq/FuseLLM-7B\n\n\n\nDataset automatically created during the evaluation run of model Wanfq/FuseLLM-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-02-01T16:48:44.963342(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] |
d664c8b26ad2d34450bc3615c0bad8745fdd5ff4 |
# Dataset Card for Evaluation run of DreadPoor/Bageluccine-7B-slerp
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [DreadPoor/Bageluccine-7B-slerp](https://huggingface.co/DreadPoor/Bageluccine-7B-slerp) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_DreadPoor__Bageluccine-7B-slerp",
"harness_winogrande_5",
split="train")
```
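Similarly, to retrieve only the aggregated metrics rather than per-task details, you can load the "results" configuration mentioned above. A minimal sketch (the configuration and split names follow the conventions of this repository, where "latest" always points to the most recent run):

```python
from datasets import load_dataset

# Load the aggregated results for the most recent evaluation run.
results = load_dataset("open-llm-leaderboard/details_DreadPoor__Bageluccine-7B-slerp",
                       "results",
                       split="latest")

# One row per run, containing the aggregated metrics shown below.
print(results[0])
```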
## Latest results
These are the [latest results from run 2024-02-01T16:48:28.503122](https://huggingface.co/datasets/open-llm-leaderboard/details_DreadPoor__Bageluccine-7B-slerp/blob/main/results_2024-02-01T16-48-28.503122.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6181945205482396,
"acc_stderr": 0.03275946388476514,
"acc_norm": 0.6220653269951615,
"acc_norm_stderr": 0.03341254290856197,
"mc1": 0.44063647490820074,
"mc1_stderr": 0.017379697555437446,
"mc2": 0.6033106033094725,
"mc2_stderr": 0.01574767156250882
},
"harness|arc:challenge|25": {
"acc": 0.6075085324232082,
"acc_stderr": 0.014269634635670726,
"acc_norm": 0.6510238907849829,
"acc_norm_stderr": 0.013928933461382506
},
"harness|hellaswag|10": {
"acc": 0.6662019518024298,
"acc_stderr": 0.004706048116764936,
"acc_norm": 0.8506273650667198,
"acc_norm_stderr": 0.0035572690393421745
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5777777777777777,
"acc_stderr": 0.04266763404099582,
"acc_norm": 0.5777777777777777,
"acc_norm_stderr": 0.04266763404099582
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6907894736842105,
"acc_stderr": 0.03761070869867479,
"acc_norm": 0.6907894736842105,
"acc_norm_stderr": 0.03761070869867479
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.57,
"acc_stderr": 0.04975698519562428,
"acc_norm": 0.57,
"acc_norm_stderr": 0.04975698519562428
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6867924528301886,
"acc_stderr": 0.028544793319055326,
"acc_norm": 0.6867924528301886,
"acc_norm_stderr": 0.028544793319055326
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7083333333333334,
"acc_stderr": 0.038009680605548594,
"acc_norm": 0.7083333333333334,
"acc_norm_stderr": 0.038009680605548594
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.41,
"acc_stderr": 0.049431107042371025,
"acc_norm": 0.41,
"acc_norm_stderr": 0.049431107042371025
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.51,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.51,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.29,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.29,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6184971098265896,
"acc_stderr": 0.03703851193099521,
"acc_norm": 0.6184971098265896,
"acc_norm_stderr": 0.03703851193099521
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.3235294117647059,
"acc_stderr": 0.04655010411319616,
"acc_norm": 0.3235294117647059,
"acc_norm_stderr": 0.04655010411319616
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.74,
"acc_stderr": 0.0440844002276808,
"acc_norm": 0.74,
"acc_norm_stderr": 0.0440844002276808
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5234042553191489,
"acc_stderr": 0.032650194750335815,
"acc_norm": 0.5234042553191489,
"acc_norm_stderr": 0.032650194750335815
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.43859649122807015,
"acc_stderr": 0.04668000738510455,
"acc_norm": 0.43859649122807015,
"acc_norm_stderr": 0.04668000738510455
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.6,
"acc_stderr": 0.040824829046386284,
"acc_norm": 0.6,
"acc_norm_stderr": 0.040824829046386284
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4074074074074074,
"acc_stderr": 0.02530590624159063,
"acc_norm": 0.4074074074074074,
"acc_norm_stderr": 0.02530590624159063
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4365079365079365,
"acc_stderr": 0.04435932892851466,
"acc_norm": 0.4365079365079365,
"acc_norm_stderr": 0.04435932892851466
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.39,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.39,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.5645161290322581,
"acc_stderr": 0.028206225591502737,
"acc_norm": 0.5645161290322581,
"acc_norm_stderr": 0.028206225591502737
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5123152709359606,
"acc_stderr": 0.035169204442208966,
"acc_norm": 0.5123152709359606,
"acc_norm_stderr": 0.035169204442208966
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.65,
"acc_stderr": 0.047937248544110196,
"acc_norm": 0.65,
"acc_norm_stderr": 0.047937248544110196
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7696969696969697,
"acc_stderr": 0.032876667586034906,
"acc_norm": 0.7696969696969697,
"acc_norm_stderr": 0.032876667586034906
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7929292929292929,
"acc_stderr": 0.028869778460267045,
"acc_norm": 0.7929292929292929,
"acc_norm_stderr": 0.028869778460267045
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8549222797927462,
"acc_stderr": 0.02541634309630644,
"acc_norm": 0.8549222797927462,
"acc_norm_stderr": 0.02541634309630644
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.5974358974358974,
"acc_stderr": 0.02486499515976775,
"acc_norm": 0.5974358974358974,
"acc_norm_stderr": 0.02486499515976775
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.34074074074074073,
"acc_stderr": 0.028897748741131143,
"acc_norm": 0.34074074074074073,
"acc_norm_stderr": 0.028897748741131143
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6596638655462185,
"acc_stderr": 0.03077805742293167,
"acc_norm": 0.6596638655462185,
"acc_norm_stderr": 0.03077805742293167
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3576158940397351,
"acc_stderr": 0.03913453431177258,
"acc_norm": 0.3576158940397351,
"acc_norm_stderr": 0.03913453431177258
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8256880733944955,
"acc_stderr": 0.016265675632010354,
"acc_norm": 0.8256880733944955,
"acc_norm_stderr": 0.016265675632010354
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4675925925925926,
"acc_stderr": 0.03402801581358966,
"acc_norm": 0.4675925925925926,
"acc_norm_stderr": 0.03402801581358966
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7892156862745098,
"acc_stderr": 0.02862654791243741,
"acc_norm": 0.7892156862745098,
"acc_norm_stderr": 0.02862654791243741
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7805907172995781,
"acc_stderr": 0.026939106581553945,
"acc_norm": 0.7805907172995781,
"acc_norm_stderr": 0.026939106581553945
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6412556053811659,
"acc_stderr": 0.03219079200419996,
"acc_norm": 0.6412556053811659,
"acc_norm_stderr": 0.03219079200419996
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7633587786259542,
"acc_stderr": 0.03727673575596913,
"acc_norm": 0.7633587786259542,
"acc_norm_stderr": 0.03727673575596913
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8264462809917356,
"acc_stderr": 0.03457272836917669,
"acc_norm": 0.8264462809917356,
"acc_norm_stderr": 0.03457272836917669
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7222222222222222,
"acc_stderr": 0.043300437496507416,
"acc_norm": 0.7222222222222222,
"acc_norm_stderr": 0.043300437496507416
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7300613496932515,
"acc_stderr": 0.034878251684978906,
"acc_norm": 0.7300613496932515,
"acc_norm_stderr": 0.034878251684978906
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4375,
"acc_stderr": 0.04708567521880525,
"acc_norm": 0.4375,
"acc_norm_stderr": 0.04708567521880525
},
"harness|hendrycksTest-management|5": {
"acc": 0.7864077669902912,
"acc_stderr": 0.04058042015646034,
"acc_norm": 0.7864077669902912,
"acc_norm_stderr": 0.04058042015646034
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8974358974358975,
"acc_stderr": 0.019875655027867443,
"acc_norm": 0.8974358974358975,
"acc_norm_stderr": 0.019875655027867443
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7994891443167306,
"acc_stderr": 0.014317653708594204,
"acc_norm": 0.7994891443167306,
"acc_norm_stderr": 0.014317653708594204
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6907514450867052,
"acc_stderr": 0.02488314057007176,
"acc_norm": 0.6907514450867052,
"acc_norm_stderr": 0.02488314057007176
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.4122905027932961,
"acc_stderr": 0.01646320023811452,
"acc_norm": 0.4122905027932961,
"acc_norm_stderr": 0.01646320023811452
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7124183006535948,
"acc_stderr": 0.025917806117147158,
"acc_norm": 0.7124183006535948,
"acc_norm_stderr": 0.025917806117147158
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6881028938906752,
"acc_stderr": 0.026311858071854155,
"acc_norm": 0.6881028938906752,
"acc_norm_stderr": 0.026311858071854155
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7160493827160493,
"acc_stderr": 0.025089478523765137,
"acc_norm": 0.7160493827160493,
"acc_norm_stderr": 0.025089478523765137
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4716312056737589,
"acc_stderr": 0.02977945095730307,
"acc_norm": 0.4716312056737589,
"acc_norm_stderr": 0.02977945095730307
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.45241199478487615,
"acc_stderr": 0.012712265105889136,
"acc_norm": 0.45241199478487615,
"acc_norm_stderr": 0.012712265105889136
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6507352941176471,
"acc_stderr": 0.028959755196824866,
"acc_norm": 0.6507352941176471,
"acc_norm_stderr": 0.028959755196824866
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6601307189542484,
"acc_stderr": 0.019162418588623553,
"acc_norm": 0.6601307189542484,
"acc_norm_stderr": 0.019162418588623553
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6909090909090909,
"acc_stderr": 0.044262946482000985,
"acc_norm": 0.6909090909090909,
"acc_norm_stderr": 0.044262946482000985
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.746938775510204,
"acc_stderr": 0.02783302387139969,
"acc_norm": 0.746938775510204,
"acc_norm_stderr": 0.02783302387139969
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.6368159203980099,
"acc_stderr": 0.03400598505599015,
"acc_norm": 0.6368159203980099,
"acc_norm_stderr": 0.03400598505599015
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.82,
"acc_stderr": 0.038612291966536934,
"acc_norm": 0.82,
"acc_norm_stderr": 0.038612291966536934
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5060240963855421,
"acc_stderr": 0.03892212195333045,
"acc_norm": 0.5060240963855421,
"acc_norm_stderr": 0.03892212195333045
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8362573099415205,
"acc_stderr": 0.028380919596145866,
"acc_norm": 0.8362573099415205,
"acc_norm_stderr": 0.028380919596145866
},
"harness|truthfulqa:mc|0": {
"mc1": 0.44063647490820074,
"mc1_stderr": 0.017379697555437446,
"mc2": 0.6033106033094725,
"mc2_stderr": 0.01574767156250882
},
"harness|winogrande|5": {
"acc": 0.7734806629834254,
"acc_stderr": 0.011764149054698332
},
"harness|gsm8k|5": {
"acc": 0.4624715693707354,
"acc_stderr": 0.013733636059107756
}
}
```
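The JSON above can also be post-processed directly, for example to average the per-subject MMLU (`hendrycksTest`) accuracies. A minimal sketch, assuming the results shown above have been saved locally as `results.json` (the file name is hypothetical, used only for illustration):

```python
import json

# Load the results file shown above (the path is an assumption for this example).
with open("results.json") as f:
    results = json.load(f)

# Collect the accuracy of every hendrycksTest (MMLU) subject and average them.
mmlu = [v["acc"] for k, v in results.items()
        if k.startswith("harness|hendrycksTest")]
print(f"MMLU subjects: {len(mmlu)}, mean acc: {sum(mmlu) / len(mmlu):.4f}")
```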
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_DreadPoor__Bageluccine-7B-slerp | [
"region:us"
] | 2024-02-01T16:50:53+00:00 | {"pretty_name": "Evaluation run of DreadPoor/Bageluccine-7B-slerp", "dataset_summary": "Dataset automatically created during the evaluation run of model [DreadPoor/Bageluccine-7B-slerp](https://huggingface.co/DreadPoor/Bageluccine-7B-slerp) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_DreadPoor__Bageluccine-7B-slerp\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-02-01T16:48:28.503122](https://huggingface.co/datasets/open-llm-leaderboard/details_DreadPoor__Bageluccine-7B-slerp/blob/main/results_2024-02-01T16-48-28.503122.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6181945205482396,\n \"acc_stderr\": 0.03275946388476514,\n \"acc_norm\": 0.6220653269951615,\n \"acc_norm_stderr\": 0.03341254290856197,\n \"mc1\": 0.44063647490820074,\n \"mc1_stderr\": 0.017379697555437446,\n \"mc2\": 0.6033106033094725,\n \"mc2_stderr\": 0.01574767156250882\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6075085324232082,\n \"acc_stderr\": 0.014269634635670726,\n \"acc_norm\": 0.6510238907849829,\n \"acc_norm_stderr\": 0.013928933461382506\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6662019518024298,\n \"acc_stderr\": 0.004706048116764936,\n \"acc_norm\": 0.8506273650667198,\n \"acc_norm_stderr\": 0.0035572690393421745\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5777777777777777,\n \"acc_stderr\": 0.04266763404099582,\n \"acc_norm\": 0.5777777777777777,\n \"acc_norm_stderr\": 0.04266763404099582\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6907894736842105,\n \"acc_stderr\": 0.03761070869867479,\n \"acc_norm\": 0.6907894736842105,\n \"acc_norm_stderr\": 0.03761070869867479\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.57,\n \"acc_stderr\": 0.04975698519562428,\n \"acc_norm\": 0.57,\n \"acc_norm_stderr\": 0.04975698519562428\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6867924528301886,\n \"acc_stderr\": 0.028544793319055326,\n \"acc_norm\": 0.6867924528301886,\n \"acc_norm_stderr\": 0.028544793319055326\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7083333333333334,\n \"acc_stderr\": 0.038009680605548594,\n \"acc_norm\": 0.7083333333333334,\n \"acc_norm_stderr\": 0.038009680605548594\n },\n 
\"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.41,\n \"acc_stderr\": 0.049431107042371025,\n \"acc_norm\": 0.41,\n \"acc_norm_stderr\": 0.049431107042371025\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.51,\n \"acc_stderr\": 0.05024183937956912,\n \"acc_norm\": 0.51,\n \"acc_norm_stderr\": 0.05024183937956912\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.29,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6184971098265896,\n \"acc_stderr\": 0.03703851193099521,\n \"acc_norm\": 0.6184971098265896,\n \"acc_norm_stderr\": 0.03703851193099521\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.3235294117647059,\n \"acc_stderr\": 0.04655010411319616,\n \"acc_norm\": 0.3235294117647059,\n \"acc_norm_stderr\": 0.04655010411319616\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.74,\n \"acc_stderr\": 0.0440844002276808,\n \"acc_norm\": 0.74,\n \"acc_norm_stderr\": 0.0440844002276808\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5234042553191489,\n \"acc_stderr\": 0.032650194750335815,\n \"acc_norm\": 0.5234042553191489,\n \"acc_norm_stderr\": 0.032650194750335815\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.43859649122807015,\n \"acc_stderr\": 0.04668000738510455,\n \"acc_norm\": 0.43859649122807015,\n \"acc_norm_stderr\": 0.04668000738510455\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.6,\n \"acc_stderr\": 0.040824829046386284,\n \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.040824829046386284\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.4074074074074074,\n \"acc_stderr\": 0.02530590624159063,\n \"acc_norm\": 0.4074074074074074,\n \"acc_norm_stderr\": 0.02530590624159063\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4365079365079365,\n \"acc_stderr\": 0.04435932892851466,\n \"acc_norm\": 0.4365079365079365,\n \"acc_norm_stderr\": 0.04435932892851466\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.39,\n \"acc_stderr\": 0.04902071300001975,\n \"acc_norm\": 0.39,\n \"acc_norm_stderr\": 0.04902071300001975\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.5645161290322581,\n \"acc_stderr\": 0.028206225591502737,\n \"acc_norm\": 0.5645161290322581,\n \"acc_norm_stderr\": 0.028206225591502737\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5123152709359606,\n \"acc_stderr\": 0.035169204442208966,\n \"acc_norm\": 0.5123152709359606,\n \"acc_norm_stderr\": 0.035169204442208966\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.65,\n \"acc_stderr\": 0.047937248544110196,\n \"acc_norm\": 0.65,\n \"acc_norm_stderr\": 0.047937248544110196\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7696969696969697,\n \"acc_stderr\": 0.032876667586034906,\n \"acc_norm\": 0.7696969696969697,\n \"acc_norm_stderr\": 0.032876667586034906\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7929292929292929,\n \"acc_stderr\": 0.028869778460267045,\n \"acc_norm\": 0.7929292929292929,\n \"acc_norm_stderr\": 0.028869778460267045\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8549222797927462,\n \"acc_stderr\": 0.02541634309630644,\n \"acc_norm\": 0.8549222797927462,\n \"acc_norm_stderr\": 
0.02541634309630644\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.5974358974358974,\n \"acc_stderr\": 0.02486499515976775,\n \"acc_norm\": 0.5974358974358974,\n \"acc_norm_stderr\": 0.02486499515976775\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.34074074074074073,\n \"acc_stderr\": 0.028897748741131143,\n \"acc_norm\": 0.34074074074074073,\n \"acc_norm_stderr\": 0.028897748741131143\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6596638655462185,\n \"acc_stderr\": 0.03077805742293167,\n \"acc_norm\": 0.6596638655462185,\n \"acc_norm_stderr\": 0.03077805742293167\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3576158940397351,\n \"acc_stderr\": 0.03913453431177258,\n \"acc_norm\": 0.3576158940397351,\n \"acc_norm_stderr\": 0.03913453431177258\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8256880733944955,\n \"acc_stderr\": 0.016265675632010354,\n \"acc_norm\": 0.8256880733944955,\n \"acc_norm_stderr\": 0.016265675632010354\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.4675925925925926,\n \"acc_stderr\": 0.03402801581358966,\n \"acc_norm\": 0.4675925925925926,\n \"acc_norm_stderr\": 0.03402801581358966\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7892156862745098,\n \"acc_stderr\": 0.02862654791243741,\n \"acc_norm\": 0.7892156862745098,\n \"acc_norm_stderr\": 0.02862654791243741\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7805907172995781,\n \"acc_stderr\": 0.026939106581553945,\n \"acc_norm\": 0.7805907172995781,\n \"acc_norm_stderr\": 0.026939106581553945\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6412556053811659,\n \"acc_stderr\": 0.03219079200419996,\n \"acc_norm\": 0.6412556053811659,\n \"acc_norm_stderr\": 0.03219079200419996\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7633587786259542,\n \"acc_stderr\": 0.03727673575596913,\n \"acc_norm\": 0.7633587786259542,\n \"acc_norm_stderr\": 0.03727673575596913\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8264462809917356,\n \"acc_stderr\": 0.03457272836917669,\n \"acc_norm\": 0.8264462809917356,\n \"acc_norm_stderr\": 0.03457272836917669\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7222222222222222,\n \"acc_stderr\": 0.043300437496507416,\n \"acc_norm\": 0.7222222222222222,\n \"acc_norm_stderr\": 0.043300437496507416\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7300613496932515,\n \"acc_stderr\": 0.034878251684978906,\n \"acc_norm\": 0.7300613496932515,\n \"acc_norm_stderr\": 0.034878251684978906\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4375,\n \"acc_stderr\": 0.04708567521880525,\n \"acc_norm\": 0.4375,\n \"acc_norm_stderr\": 0.04708567521880525\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7864077669902912,\n \"acc_stderr\": 0.04058042015646034,\n \"acc_norm\": 0.7864077669902912,\n \"acc_norm_stderr\": 0.04058042015646034\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8974358974358975,\n \"acc_stderr\": 0.019875655027867443,\n \"acc_norm\": 0.8974358974358975,\n \"acc_norm_stderr\": 0.019875655027867443\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 
0.7994891443167306,\n \"acc_stderr\": 0.014317653708594204,\n \"acc_norm\": 0.7994891443167306,\n \"acc_norm_stderr\": 0.014317653708594204\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.6907514450867052,\n \"acc_stderr\": 0.02488314057007176,\n \"acc_norm\": 0.6907514450867052,\n \"acc_norm_stderr\": 0.02488314057007176\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.4122905027932961,\n \"acc_stderr\": 0.01646320023811452,\n \"acc_norm\": 0.4122905027932961,\n \"acc_norm_stderr\": 0.01646320023811452\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7124183006535948,\n \"acc_stderr\": 0.025917806117147158,\n \"acc_norm\": 0.7124183006535948,\n \"acc_norm_stderr\": 0.025917806117147158\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6881028938906752,\n \"acc_stderr\": 0.026311858071854155,\n \"acc_norm\": 0.6881028938906752,\n \"acc_norm_stderr\": 0.026311858071854155\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7160493827160493,\n \"acc_stderr\": 0.025089478523765137,\n \"acc_norm\": 0.7160493827160493,\n \"acc_norm_stderr\": 0.025089478523765137\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4716312056737589,\n \"acc_stderr\": 0.02977945095730307,\n \"acc_norm\": 0.4716312056737589,\n \"acc_norm_stderr\": 0.02977945095730307\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.45241199478487615,\n \"acc_stderr\": 0.012712265105889136,\n \"acc_norm\": 0.45241199478487615,\n \"acc_norm_stderr\": 0.012712265105889136\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6507352941176471,\n \"acc_stderr\": 0.028959755196824866,\n \"acc_norm\": 0.6507352941176471,\n \"acc_norm_stderr\": 0.028959755196824866\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6601307189542484,\n \"acc_stderr\": 0.019162418588623553,\n \"acc_norm\": 0.6601307189542484,\n \"acc_norm_stderr\": 0.019162418588623553\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6909090909090909,\n \"acc_stderr\": 0.044262946482000985,\n \"acc_norm\": 0.6909090909090909,\n \"acc_norm_stderr\": 0.044262946482000985\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.746938775510204,\n \"acc_stderr\": 0.02783302387139969,\n \"acc_norm\": 0.746938775510204,\n \"acc_norm_stderr\": 0.02783302387139969\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.6368159203980099,\n \"acc_stderr\": 0.03400598505599015,\n \"acc_norm\": 0.6368159203980099,\n \"acc_norm_stderr\": 0.03400598505599015\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.82,\n \"acc_stderr\": 0.038612291966536934,\n \"acc_norm\": 0.82,\n \"acc_norm_stderr\": 0.038612291966536934\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5060240963855421,\n \"acc_stderr\": 0.03892212195333045,\n \"acc_norm\": 0.5060240963855421,\n \"acc_norm_stderr\": 0.03892212195333045\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8362573099415205,\n \"acc_stderr\": 0.028380919596145866,\n \"acc_norm\": 0.8362573099415205,\n \"acc_norm_stderr\": 0.028380919596145866\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.44063647490820074,\n \"mc1_stderr\": 0.017379697555437446,\n \"mc2\": 0.6033106033094725,\n \"mc2_stderr\": 0.01574767156250882\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7734806629834254,\n \"acc_stderr\": 0.011764149054698332\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.4624715693707354,\n \"acc_stderr\": 
0.013733636059107756\n }\n}\n```", "repo_url": "https://huggingface.co/DreadPoor/Bageluccine-7B-slerp", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|arc:challenge|25_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|gsm8k|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|hellaswag|10_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T16-48-28.503122.parquet", 
"**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-management|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-virology|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T16-48-28.503122.parquet", 
"**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-management|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-02-01T16-48-28.503122.parquet", 
"**/details_harness|hendrycksTest-philosophy|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-virology|5_2024-02-01T16-48-28.503122.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": 
["**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-global_facts|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T16-48-28.503122.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|hendrycksTest-management|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|hendrycksTest-virology|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|truthfulqa:mc|0_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_02_01T16_48_28.503122", "path": ["**/details_harness|winogrande|5_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-02-01T16-48-28.503122.parquet"]}]}, {"config_name": "results", "data_files": [{"split": 
"2024_02_01T16_48_28.503122", "path": ["results_2024-02-01T16-48-28.503122.parquet"]}, {"split": "latest", "path": ["results_2024-02-01T16-48-28.503122.parquet"]}]}]} | 2024-02-01T16:51:18+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of DreadPoor/Bageluccine-7B-slerp
Dataset automatically created during the evaluation run of model DreadPoor/Bageluccine-7B-slerp on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
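A minimal sketch, assuming the repository id follows the leaderboard's usual `details_<org>__<model>` naming convention for this model:
```python
from datasets import load_dataset

# Repository id inferred from the model name (assumption based on the
# leaderboard's naming convention for details datasets).
data = load_dataset("open-llm-leaderboard/details_DreadPoor__Bageluccine-7B-slerp",
	"harness_winogrande_5",
	split="train")
```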
## Latest results
These are the latest results from run 2024-02-01T16:48:28.503122 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each one in the results and the "latest" split for each eval):
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Evaluation run of DreadPoor/Bageluccine-7B-slerp\n\n\n\nDataset automatically created during the evaluation run of model DreadPoor/Bageluccine-7B-slerp on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-02-01T16:48:28.503122(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of DreadPoor/Bageluccine-7B-slerp\n\n\n\nDataset automatically created during the evaluation run of model DreadPoor/Bageluccine-7B-slerp on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-02-01T16:48:28.503122(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] |
ccb9c1522d74c7898bb2a08eabe7d7de9b7dc8bc |
# Dataset Card for Evaluation run of YKM12/Mistral-7B-summ-privatev1
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [YKM12/Mistral-7B-summ-privatev1](https://huggingface.co/YKM12/Mistral-7B-summ-privatev1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_YKM12__Mistral-7B-summ-privatev1",
"harness_winogrande_5",
split="train")
```
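The `"results"` configuration mentioned above can be loaded the same way. A small sketch using the aggregated-results config and the `latest` split declared in this card's metadata:
```python
from datasets import load_dataset

# Aggregated metrics for the run live in the "results" configuration;
# the "latest" split always points at the most recent eval.
results = load_dataset("open-llm-leaderboard/details_YKM12__Mistral-7B-summ-privatev1",
	"results",
	split="latest")
```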
## Latest results
These are the [latest results from run 2024-02-01T16:51:47.124175](https://huggingface.co/datasets/open-llm-leaderboard/details_YKM12__Mistral-7B-summ-privatev1/blob/main/results_2024-02-01T16-51-47.124175.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6561198532911652,
"acc_stderr": 0.03198692258775897,
"acc_norm": 0.65546725426879,
"acc_norm_stderr": 0.03265800632201962,
"mc1": 0.5826193390452876,
"mc1_stderr": 0.017262891063272164,
"mc2": 0.7188545613118653,
"mc2_stderr": 0.01474731238488106
},
"harness|arc:challenge|25": {
"acc": 0.7090443686006825,
"acc_stderr": 0.013273077865907588,
"acc_norm": 0.7414675767918089,
"acc_norm_stderr": 0.012794553754288694
},
"harness|hellaswag|10": {
"acc": 0.716391157140012,
"acc_stderr": 0.004498280244494495,
"acc_norm": 0.8884684325831508,
"acc_norm_stderr": 0.0031414591751392712
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252605,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252605
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.674074074074074,
"acc_stderr": 0.040491220417025055,
"acc_norm": 0.674074074074074,
"acc_norm_stderr": 0.040491220417025055
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6973684210526315,
"acc_stderr": 0.037385206761196686,
"acc_norm": 0.6973684210526315,
"acc_norm_stderr": 0.037385206761196686
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.64,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.64,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7056603773584905,
"acc_stderr": 0.028049186315695255,
"acc_norm": 0.7056603773584905,
"acc_norm_stderr": 0.028049186315695255
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7708333333333334,
"acc_stderr": 0.03514697467862388,
"acc_norm": 0.7708333333333334,
"acc_norm_stderr": 0.03514697467862388
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.49,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.49,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.54,
"acc_stderr": 0.05009082659620333,
"acc_norm": 0.54,
"acc_norm_stderr": 0.05009082659620333
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6647398843930635,
"acc_stderr": 0.03599586301247077,
"acc_norm": 0.6647398843930635,
"acc_norm_stderr": 0.03599586301247077
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.39215686274509803,
"acc_stderr": 0.048580835742663454,
"acc_norm": 0.39215686274509803,
"acc_norm_stderr": 0.048580835742663454
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.75,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5531914893617021,
"acc_stderr": 0.0325005368436584,
"acc_norm": 0.5531914893617021,
"acc_norm_stderr": 0.0325005368436584
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5,
"acc_stderr": 0.047036043419179864,
"acc_norm": 0.5,
"acc_norm_stderr": 0.047036043419179864
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5310344827586206,
"acc_stderr": 0.04158632762097828,
"acc_norm": 0.5310344827586206,
"acc_norm_stderr": 0.04158632762097828
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.41798941798941797,
"acc_stderr": 0.025402555503260912,
"acc_norm": 0.41798941798941797,
"acc_norm_stderr": 0.025402555503260912
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.5158730158730159,
"acc_stderr": 0.044698818540726076,
"acc_norm": 0.5158730158730159,
"acc_norm_stderr": 0.044698818540726076
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695235,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695235
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.8,
"acc_stderr": 0.022755204959542946,
"acc_norm": 0.8,
"acc_norm_stderr": 0.022755204959542946
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5172413793103449,
"acc_stderr": 0.035158955511656986,
"acc_norm": 0.5172413793103449,
"acc_norm_stderr": 0.035158955511656986
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7757575757575758,
"acc_stderr": 0.032568666616811015,
"acc_norm": 0.7757575757575758,
"acc_norm_stderr": 0.032568666616811015
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8181818181818182,
"acc_stderr": 0.0274796030105388,
"acc_norm": 0.8181818181818182,
"acc_norm_stderr": 0.0274796030105388
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9119170984455959,
"acc_stderr": 0.02045374660160103,
"acc_norm": 0.9119170984455959,
"acc_norm_stderr": 0.02045374660160103
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6692307692307692,
"acc_stderr": 0.023854795680971125,
"acc_norm": 0.6692307692307692,
"acc_norm_stderr": 0.023854795680971125
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.337037037037037,
"acc_stderr": 0.02882088466625326,
"acc_norm": 0.337037037037037,
"acc_norm_stderr": 0.02882088466625326
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6848739495798319,
"acc_stderr": 0.030176808288974337,
"acc_norm": 0.6848739495798319,
"acc_norm_stderr": 0.030176808288974337
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.37748344370860926,
"acc_stderr": 0.03958027231121569,
"acc_norm": 0.37748344370860926,
"acc_norm_stderr": 0.03958027231121569
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8458715596330275,
"acc_stderr": 0.015480826865374303,
"acc_norm": 0.8458715596330275,
"acc_norm_stderr": 0.015480826865374303
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5138888888888888,
"acc_stderr": 0.03408655867977749,
"acc_norm": 0.5138888888888888,
"acc_norm_stderr": 0.03408655867977749
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8382352941176471,
"acc_stderr": 0.025845017986926917,
"acc_norm": 0.8382352941176471,
"acc_norm_stderr": 0.025845017986926917
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8016877637130801,
"acc_stderr": 0.025955020841621126,
"acc_norm": 0.8016877637130801,
"acc_norm_stderr": 0.025955020841621126
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6816143497757847,
"acc_stderr": 0.03126580522513713,
"acc_norm": 0.6816143497757847,
"acc_norm_stderr": 0.03126580522513713
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8015267175572519,
"acc_stderr": 0.034981493854624714,
"acc_norm": 0.8015267175572519,
"acc_norm_stderr": 0.034981493854624714
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7851239669421488,
"acc_stderr": 0.037494924487096966,
"acc_norm": 0.7851239669421488,
"acc_norm_stderr": 0.037494924487096966
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7962962962962963,
"acc_stderr": 0.03893542518824847,
"acc_norm": 0.7962962962962963,
"acc_norm_stderr": 0.03893542518824847
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7607361963190185,
"acc_stderr": 0.0335195387952127,
"acc_norm": 0.7607361963190185,
"acc_norm_stderr": 0.0335195387952127
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.45535714285714285,
"acc_stderr": 0.047268355537191,
"acc_norm": 0.45535714285714285,
"acc_norm_stderr": 0.047268355537191
},
"harness|hendrycksTest-management|5": {
"acc": 0.7669902912621359,
"acc_stderr": 0.04185832598928315,
"acc_norm": 0.7669902912621359,
"acc_norm_stderr": 0.04185832598928315
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8846153846153846,
"acc_stderr": 0.02093019318517933,
"acc_norm": 0.8846153846153846,
"acc_norm_stderr": 0.02093019318517933
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.72,
"acc_stderr": 0.045126085985421276,
"acc_norm": 0.72,
"acc_norm_stderr": 0.045126085985421276
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8237547892720306,
"acc_stderr": 0.013625556907993466,
"acc_norm": 0.8237547892720306,
"acc_norm_stderr": 0.013625556907993466
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7369942196531792,
"acc_stderr": 0.023703099525258176,
"acc_norm": 0.7369942196531792,
"acc_norm_stderr": 0.023703099525258176
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.4424581005586592,
"acc_stderr": 0.016611393687268584,
"acc_norm": 0.4424581005586592,
"acc_norm_stderr": 0.016611393687268584
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7352941176470589,
"acc_stderr": 0.025261691219729484,
"acc_norm": 0.7352941176470589,
"acc_norm_stderr": 0.025261691219729484
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7138263665594855,
"acc_stderr": 0.025670259242188933,
"acc_norm": 0.7138263665594855,
"acc_norm_stderr": 0.025670259242188933
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7530864197530864,
"acc_stderr": 0.023993501709042107,
"acc_norm": 0.7530864197530864,
"acc_norm_stderr": 0.023993501709042107
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.48936170212765956,
"acc_stderr": 0.02982074719142248,
"acc_norm": 0.48936170212765956,
"acc_norm_stderr": 0.02982074719142248
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.47196870925684486,
"acc_stderr": 0.012750151802922435,
"acc_norm": 0.47196870925684486,
"acc_norm_stderr": 0.012750151802922435
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6727941176470589,
"acc_stderr": 0.028501452860396553,
"acc_norm": 0.6727941176470589,
"acc_norm_stderr": 0.028501452860396553
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6650326797385621,
"acc_stderr": 0.019094228167000328,
"acc_norm": 0.6650326797385621,
"acc_norm_stderr": 0.019094228167000328
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6636363636363637,
"acc_stderr": 0.04525393596302506,
"acc_norm": 0.6636363636363637,
"acc_norm_stderr": 0.04525393596302506
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7346938775510204,
"acc_stderr": 0.028263889943784596,
"acc_norm": 0.7346938775510204,
"acc_norm_stderr": 0.028263889943784596
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8407960199004975,
"acc_stderr": 0.02587064676616914,
"acc_norm": 0.8407960199004975,
"acc_norm_stderr": 0.02587064676616914
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.85,
"acc_stderr": 0.0358870281282637,
"acc_norm": 0.85,
"acc_norm_stderr": 0.0358870281282637
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5602409638554217,
"acc_stderr": 0.03864139923699122,
"acc_norm": 0.5602409638554217,
"acc_norm_stderr": 0.03864139923699122
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8421052631578947,
"acc_stderr": 0.027966785859160893,
"acc_norm": 0.8421052631578947,
"acc_norm_stderr": 0.027966785859160893
},
"harness|truthfulqa:mc|0": {
"mc1": 0.5826193390452876,
"mc1_stderr": 0.017262891063272164,
"mc2": 0.7188545613118653,
"mc2_stderr": 0.01474731238488106
},
"harness|winogrande|5": {
"acc": 0.8531965272296764,
"acc_stderr": 0.009946627440250676
},
"harness|gsm8k|5": {
"acc": 0.7020470053070508,
"acc_stderr": 0.01259793223291453
}
}
```
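If you prefer working with the raw JSON linked above rather than the parquet splits, a minimal sketch (assuming `huggingface_hub` is installed) downloads and parses the results file directly:
```python
import json

from huggingface_hub import hf_hub_download

# Download the per-run results file from the dataset repository.
path = hf_hub_download(
    repo_id="open-llm-leaderboard/details_YKM12__Mistral-7B-summ-privatev1",
    filename="results_2024-02-01T16-51-47.124175.json",
    repo_type="dataset",
)

with open(path) as f:
    results = json.load(f)

# The metrics printed in this card are keyed by task name; depending on the
# harness version they may sit at the top level or under a "results" key.
metrics = results.get("results", results)
print(metrics["all"]["acc"])
```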
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_YKM12__Mistral-7B-summ-privatev1 | [
"region:us"
] | 2024-02-01T16:54:06+00:00 | {"pretty_name": "Evaluation run of YKM12/Mistral-7B-summ-privatev1", "dataset_summary": "Dataset automatically created during the evaluation run of model [YKM12/Mistral-7B-summ-privatev1](https://huggingface.co/YKM12/Mistral-7B-summ-privatev1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_YKM12__Mistral-7B-summ-privatev1\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-02-01T16:51:47.124175](https://huggingface.co/datasets/open-llm-leaderboard/details_YKM12__Mistral-7B-summ-privatev1/blob/main/results_2024-02-01T16-51-47.124175.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6561198532911652,\n \"acc_stderr\": 0.03198692258775897,\n \"acc_norm\": 0.65546725426879,\n \"acc_norm_stderr\": 0.03265800632201962,\n \"mc1\": 0.5826193390452876,\n \"mc1_stderr\": 0.017262891063272164,\n \"mc2\": 0.7188545613118653,\n \"mc2_stderr\": 0.01474731238488106\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.7090443686006825,\n \"acc_stderr\": 0.013273077865907588,\n \"acc_norm\": 0.7414675767918089,\n \"acc_norm_stderr\": 0.012794553754288694\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.716391157140012,\n \"acc_stderr\": 0.004498280244494495,\n \"acc_norm\": 0.8884684325831508,\n \"acc_norm_stderr\": 0.0031414591751392712\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252605,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252605\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.674074074074074,\n \"acc_stderr\": 0.040491220417025055,\n \"acc_norm\": 0.674074074074074,\n \"acc_norm_stderr\": 0.040491220417025055\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6973684210526315,\n \"acc_stderr\": 0.037385206761196686,\n \"acc_norm\": 0.6973684210526315,\n \"acc_norm_stderr\": 0.037385206761196686\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.64,\n \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.64,\n \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.7056603773584905,\n \"acc_stderr\": 0.028049186315695255,\n \"acc_norm\": 0.7056603773584905,\n \"acc_norm_stderr\": 0.028049186315695255\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7708333333333334,\n \"acc_stderr\": 0.03514697467862388,\n \"acc_norm\": 0.7708333333333334,\n \"acc_norm_stderr\": 0.03514697467862388\n },\n 
\"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.49,\n \"acc_stderr\": 0.05024183937956912,\n \"acc_norm\": 0.49,\n \"acc_norm_stderr\": 0.05024183937956912\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.54,\n \"acc_stderr\": 0.05009082659620333,\n \"acc_norm\": 0.54,\n \"acc_norm_stderr\": 0.05009082659620333\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6647398843930635,\n \"acc_stderr\": 0.03599586301247077,\n \"acc_norm\": 0.6647398843930635,\n \"acc_norm_stderr\": 0.03599586301247077\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.39215686274509803,\n \"acc_stderr\": 0.048580835742663454,\n \"acc_norm\": 0.39215686274509803,\n \"acc_norm_stderr\": 0.048580835742663454\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.75,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5531914893617021,\n \"acc_stderr\": 0.0325005368436584,\n \"acc_norm\": 0.5531914893617021,\n \"acc_norm_stderr\": 0.0325005368436584\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.047036043419179864,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.047036043419179864\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5310344827586206,\n \"acc_stderr\": 0.04158632762097828,\n \"acc_norm\": 0.5310344827586206,\n \"acc_norm_stderr\": 0.04158632762097828\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.41798941798941797,\n \"acc_stderr\": 0.025402555503260912,\n \"acc_norm\": 0.41798941798941797,\n \"acc_norm_stderr\": 0.025402555503260912\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.5158730158730159,\n \"acc_stderr\": 0.044698818540726076,\n \"acc_norm\": 0.5158730158730159,\n \"acc_norm_stderr\": 0.044698818540726076\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.8,\n \"acc_stderr\": 0.022755204959542946,\n \"acc_norm\": 0.8,\n \"acc_norm_stderr\": 0.022755204959542946\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5172413793103449,\n \"acc_stderr\": 0.035158955511656986,\n \"acc_norm\": 0.5172413793103449,\n \"acc_norm_stderr\": 0.035158955511656986\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7757575757575758,\n \"acc_stderr\": 0.032568666616811015,\n \"acc_norm\": 0.7757575757575758,\n \"acc_norm_stderr\": 0.032568666616811015\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.8181818181818182,\n \"acc_stderr\": 0.0274796030105388,\n \"acc_norm\": 0.8181818181818182,\n \"acc_norm_stderr\": 0.0274796030105388\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.9119170984455959,\n \"acc_stderr\": 0.02045374660160103,\n \"acc_norm\": 0.9119170984455959,\n \"acc_norm_stderr\": 0.02045374660160103\n },\n 
\"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6692307692307692,\n \"acc_stderr\": 0.023854795680971125,\n \"acc_norm\": 0.6692307692307692,\n \"acc_norm_stderr\": 0.023854795680971125\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.337037037037037,\n \"acc_stderr\": 0.02882088466625326,\n \"acc_norm\": 0.337037037037037,\n \"acc_norm_stderr\": 0.02882088466625326\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6848739495798319,\n \"acc_stderr\": 0.030176808288974337,\n \"acc_norm\": 0.6848739495798319,\n \"acc_norm_stderr\": 0.030176808288974337\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.37748344370860926,\n \"acc_stderr\": 0.03958027231121569,\n \"acc_norm\": 0.37748344370860926,\n \"acc_norm_stderr\": 0.03958027231121569\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8458715596330275,\n \"acc_stderr\": 0.015480826865374303,\n \"acc_norm\": 0.8458715596330275,\n \"acc_norm_stderr\": 0.015480826865374303\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5138888888888888,\n \"acc_stderr\": 0.03408655867977749,\n \"acc_norm\": 0.5138888888888888,\n \"acc_norm_stderr\": 0.03408655867977749\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8382352941176471,\n \"acc_stderr\": 0.025845017986926917,\n \"acc_norm\": 0.8382352941176471,\n \"acc_norm_stderr\": 0.025845017986926917\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.8016877637130801,\n \"acc_stderr\": 0.025955020841621126,\n \"acc_norm\": 0.8016877637130801,\n \"acc_norm_stderr\": 0.025955020841621126\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6816143497757847,\n \"acc_stderr\": 0.03126580522513713,\n \"acc_norm\": 0.6816143497757847,\n \"acc_norm_stderr\": 0.03126580522513713\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.8015267175572519,\n \"acc_stderr\": 0.034981493854624714,\n \"acc_norm\": 0.8015267175572519,\n \"acc_norm_stderr\": 0.034981493854624714\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7851239669421488,\n \"acc_stderr\": 0.037494924487096966,\n \"acc_norm\": 0.7851239669421488,\n \"acc_norm_stderr\": 0.037494924487096966\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7962962962962963,\n \"acc_stderr\": 0.03893542518824847,\n \"acc_norm\": 0.7962962962962963,\n \"acc_norm_stderr\": 0.03893542518824847\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7607361963190185,\n \"acc_stderr\": 0.0335195387952127,\n \"acc_norm\": 0.7607361963190185,\n \"acc_norm_stderr\": 0.0335195387952127\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.45535714285714285,\n \"acc_stderr\": 0.047268355537191,\n \"acc_norm\": 0.45535714285714285,\n \"acc_norm_stderr\": 0.047268355537191\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7669902912621359,\n \"acc_stderr\": 0.04185832598928315,\n \"acc_norm\": 0.7669902912621359,\n \"acc_norm_stderr\": 0.04185832598928315\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8846153846153846,\n \"acc_stderr\": 0.02093019318517933,\n \"acc_norm\": 0.8846153846153846,\n \"acc_norm_stderr\": 0.02093019318517933\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.72,\n \"acc_stderr\": 0.045126085985421276,\n \"acc_norm\": 0.72,\n \"acc_norm_stderr\": 0.045126085985421276\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 
0.8237547892720306,\n \"acc_stderr\": 0.013625556907993466,\n \"acc_norm\": 0.8237547892720306,\n \"acc_norm_stderr\": 0.013625556907993466\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7369942196531792,\n \"acc_stderr\": 0.023703099525258176,\n \"acc_norm\": 0.7369942196531792,\n \"acc_norm_stderr\": 0.023703099525258176\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.4424581005586592,\n \"acc_stderr\": 0.016611393687268584,\n \"acc_norm\": 0.4424581005586592,\n \"acc_norm_stderr\": 0.016611393687268584\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7352941176470589,\n \"acc_stderr\": 0.025261691219729484,\n \"acc_norm\": 0.7352941176470589,\n \"acc_norm_stderr\": 0.025261691219729484\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7138263665594855,\n \"acc_stderr\": 0.025670259242188933,\n \"acc_norm\": 0.7138263665594855,\n \"acc_norm_stderr\": 0.025670259242188933\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7530864197530864,\n \"acc_stderr\": 0.023993501709042107,\n \"acc_norm\": 0.7530864197530864,\n \"acc_norm_stderr\": 0.023993501709042107\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.48936170212765956,\n \"acc_stderr\": 0.02982074719142248,\n \"acc_norm\": 0.48936170212765956,\n \"acc_norm_stderr\": 0.02982074719142248\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.47196870925684486,\n \"acc_stderr\": 0.012750151802922435,\n \"acc_norm\": 0.47196870925684486,\n \"acc_norm_stderr\": 0.012750151802922435\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6727941176470589,\n \"acc_stderr\": 0.028501452860396553,\n \"acc_norm\": 0.6727941176470589,\n \"acc_norm_stderr\": 0.028501452860396553\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6650326797385621,\n \"acc_stderr\": 0.019094228167000328,\n \"acc_norm\": 0.6650326797385621,\n \"acc_norm_stderr\": 0.019094228167000328\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6636363636363637,\n \"acc_stderr\": 0.04525393596302506,\n \"acc_norm\": 0.6636363636363637,\n \"acc_norm_stderr\": 0.04525393596302506\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7346938775510204,\n \"acc_stderr\": 0.028263889943784596,\n \"acc_norm\": 0.7346938775510204,\n \"acc_norm_stderr\": 0.028263889943784596\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8407960199004975,\n \"acc_stderr\": 0.02587064676616914,\n \"acc_norm\": 0.8407960199004975,\n \"acc_norm_stderr\": 0.02587064676616914\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.85,\n \"acc_stderr\": 0.0358870281282637,\n \"acc_norm\": 0.85,\n \"acc_norm_stderr\": 0.0358870281282637\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5602409638554217,\n \"acc_stderr\": 0.03864139923699122,\n \"acc_norm\": 0.5602409638554217,\n \"acc_norm_stderr\": 0.03864139923699122\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8421052631578947,\n \"acc_stderr\": 0.027966785859160893,\n \"acc_norm\": 0.8421052631578947,\n \"acc_norm_stderr\": 0.027966785859160893\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5826193390452876,\n \"mc1_stderr\": 0.017262891063272164,\n \"mc2\": 0.7188545613118653,\n \"mc2_stderr\": 0.01474731238488106\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8531965272296764,\n \"acc_stderr\": 0.009946627440250676\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.7020470053070508,\n \"acc_stderr\": 
0.01259793223291453\n }\n}\n```", "repo_url": "https://huggingface.co/YKM12/Mistral-7B-summ-privatev1", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|arc:challenge|25_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|gsm8k|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|hellaswag|10_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T16-51-47.124175.parquet", 
"**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-management|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-virology|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T16-51-47.124175.parquet", 
"**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-management|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-02-01T16-51-47.124175.parquet", 
"**/details_harness|hendrycksTest-philosophy|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-virology|5_2024-02-01T16-51-47.124175.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": 
["**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-global_facts|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T16-51-47.124175.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|hendrycksTest-management|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|hendrycksTest-virology|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|truthfulqa:mc|0_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_02_01T16_51_47.124175", "path": ["**/details_harness|winogrande|5_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-02-01T16-51-47.124175.parquet"]}]}, {"config_name": "results", "data_files": [{"split": 
"2024_02_01T16_51_47.124175", "path": ["results_2024-02-01T16-51-47.124175.parquet"]}, {"split": "latest", "path": ["results_2024-02-01T16-51-47.124175.parquet"]}]}]} | 2024-02-01T16:54:28+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of YKM12/Mistral-7B-summ-privatev1
Dataset automatically created during the evaluation run of model YKM12/Mistral-7B-summ-privatev1 on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
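For instance, assuming this details repository follows the leaderboard's usual `details_<org>__<model>` naming convention (the repo id below is inferred from the model name, not stated in this card), a minimal sketch:

```python
from datasets import load_dataset

# Repo id is an assumption based on the standard Open LLM Leaderboard
# naming convention for the model YKM12/Mistral-7B-summ-privatev1.
data = load_dataset(
    "open-llm-leaderboard/details_YKM12__Mistral-7B-summ-privatev1",
    "harness_winogrande_5",
    split="latest",  # or a timestamped split such as "2024_02_01T16_51_47.124175"
)
```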
## Latest results
These are the latest results from run 2024-02-01T16:51:47.124175 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each one in the results and the "latest" split for each eval):
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Evaluation run of YKM12/Mistral-7B-summ-privatev1\n\n\n\nDataset automatically created during the evaluation run of model YKM12/Mistral-7B-summ-privatev1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-02-01T16:51:47.124175(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of YKM12/Mistral-7B-summ-privatev1\n\n\n\nDataset automatically created during the evaluation run of model YKM12/Mistral-7B-summ-privatev1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-02-01T16:51:47.124175(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] |
c7dcded467dfe2225e6372e68adb6ccba1672b8e |
# Dataset Card for Evaluation run of BlouseJury/Mistral-7B-Discord-0.1-DPO
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [BlouseJury/Mistral-7B-Discord-0.1-DPO](https://huggingface.co/BlouseJury/Mistral-7B-Discord-0.1-DPO) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset

# "latest" always resolves to the most recent timestamped run split.
data = load_dataset("open-llm-leaderboard/details_BlouseJury__Mistral-7B-Discord-0.1-DPO",
	"harness_winogrande_5",
	split="latest")
```
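Any of the 63 configuration names listed in this card's YAML can replace `harness_winogrande_5`. The aggregated metrics live in the `results` configuration; a minimal sketch using the split names defined above:

```python
from datasets import load_dataset

# One-row dataset holding the aggregated results record for the run;
# "latest" always points at the newest timestamped split.
results = load_dataset(
    "open-llm-leaderboard/details_BlouseJury__Mistral-7B-Discord-0.1-DPO",
    "results",
    split="latest",
)
print(results[0])
```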
## Latest results
These are the [latest results from run 2024-02-01T17:00:23.691484](https://huggingface.co/datasets/open-llm-leaderboard/details_BlouseJury__Mistral-7B-Discord-0.1-DPO/blob/main/results_2024-02-01T17-00-23.691484.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6233951979610222,
"acc_stderr": 0.032739541113838276,
"acc_norm": 0.6297936917040108,
"acc_norm_stderr": 0.033418584464628434,
"mc1": 0.38922888616891066,
"mc1_stderr": 0.017068552680690328,
"mc2": 0.5527536910345616,
"mc2_stderr": 0.015269414074864143
},
"harness|arc:challenge|25": {
"acc": 0.6015358361774744,
"acc_stderr": 0.014306946052735563,
"acc_norm": 0.6322525597269625,
"acc_norm_stderr": 0.014090995618168484
},
"harness|hellaswag|10": {
"acc": 0.6394144592710616,
"acc_stderr": 0.004791890625834196,
"acc_norm": 0.8327026488747261,
"acc_norm_stderr": 0.003724783389253327
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.29,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.29,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6074074074074074,
"acc_stderr": 0.0421850621536888,
"acc_norm": 0.6074074074074074,
"acc_norm_stderr": 0.0421850621536888
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6513157894736842,
"acc_stderr": 0.0387813988879761,
"acc_norm": 0.6513157894736842,
"acc_norm_stderr": 0.0387813988879761
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.56,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.56,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6830188679245283,
"acc_stderr": 0.02863723563980089,
"acc_norm": 0.6830188679245283,
"acc_norm_stderr": 0.02863723563980089
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7152777777777778,
"acc_stderr": 0.037738099906869334,
"acc_norm": 0.7152777777777778,
"acc_norm_stderr": 0.037738099906869334
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.5,
"acc_stderr": 0.050251890762960605,
"acc_norm": 0.5,
"acc_norm_stderr": 0.050251890762960605
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.49,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.49,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.42,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.42,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.5953757225433526,
"acc_stderr": 0.03742461193887249,
"acc_norm": 0.5953757225433526,
"acc_norm_stderr": 0.03742461193887249
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4117647058823529,
"acc_stderr": 0.04897104952726366,
"acc_norm": 0.4117647058823529,
"acc_norm_stderr": 0.04897104952726366
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.76,
"acc_stderr": 0.04292346959909283,
"acc_norm": 0.76,
"acc_norm_stderr": 0.04292346959909283
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5787234042553191,
"acc_stderr": 0.03227834510146268,
"acc_norm": 0.5787234042553191,
"acc_norm_stderr": 0.03227834510146268
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4473684210526316,
"acc_stderr": 0.04677473004491199,
"acc_norm": 0.4473684210526316,
"acc_norm_stderr": 0.04677473004491199
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5586206896551724,
"acc_stderr": 0.04137931034482758,
"acc_norm": 0.5586206896551724,
"acc_norm_stderr": 0.04137931034482758
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.41798941798941797,
"acc_stderr": 0.025402555503260912,
"acc_norm": 0.41798941798941797,
"acc_norm_stderr": 0.025402555503260912
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4444444444444444,
"acc_stderr": 0.04444444444444449,
"acc_norm": 0.4444444444444444,
"acc_norm_stderr": 0.04444444444444449
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.37,
"acc_stderr": 0.048523658709391,
"acc_norm": 0.37,
"acc_norm_stderr": 0.048523658709391
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7548387096774194,
"acc_stderr": 0.02447224384089553,
"acc_norm": 0.7548387096774194,
"acc_norm_stderr": 0.02447224384089553
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5024630541871922,
"acc_stderr": 0.035179450386910616,
"acc_norm": 0.5024630541871922,
"acc_norm_stderr": 0.035179450386910616
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.66,
"acc_stderr": 0.04760952285695237,
"acc_norm": 0.66,
"acc_norm_stderr": 0.04760952285695237
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7393939393939394,
"acc_stderr": 0.034277431758165236,
"acc_norm": 0.7393939393939394,
"acc_norm_stderr": 0.034277431758165236
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.803030303030303,
"acc_stderr": 0.028335609732463362,
"acc_norm": 0.803030303030303,
"acc_norm_stderr": 0.028335609732463362
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8601036269430051,
"acc_stderr": 0.02503387058301518,
"acc_norm": 0.8601036269430051,
"acc_norm_stderr": 0.02503387058301518
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6333333333333333,
"acc_stderr": 0.02443301646605246,
"acc_norm": 0.6333333333333333,
"acc_norm_stderr": 0.02443301646605246
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3074074074074074,
"acc_stderr": 0.028133252578815642,
"acc_norm": 0.3074074074074074,
"acc_norm_stderr": 0.028133252578815642
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6470588235294118,
"acc_stderr": 0.031041941304059278,
"acc_norm": 0.6470588235294118,
"acc_norm_stderr": 0.031041941304059278
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3576158940397351,
"acc_stderr": 0.03913453431177258,
"acc_norm": 0.3576158940397351,
"acc_norm_stderr": 0.03913453431177258
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8165137614678899,
"acc_stderr": 0.016595259710399327,
"acc_norm": 0.8165137614678899,
"acc_norm_stderr": 0.016595259710399327
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5509259259259259,
"acc_stderr": 0.03392238405321617,
"acc_norm": 0.5509259259259259,
"acc_norm_stderr": 0.03392238405321617
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8088235294117647,
"acc_stderr": 0.02759917430064076,
"acc_norm": 0.8088235294117647,
"acc_norm_stderr": 0.02759917430064076
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7848101265822784,
"acc_stderr": 0.026750826994676173,
"acc_norm": 0.7848101265822784,
"acc_norm_stderr": 0.026750826994676173
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6681614349775785,
"acc_stderr": 0.03160295143776678,
"acc_norm": 0.6681614349775785,
"acc_norm_stderr": 0.03160295143776678
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7633587786259542,
"acc_stderr": 0.03727673575596913,
"acc_norm": 0.7633587786259542,
"acc_norm_stderr": 0.03727673575596913
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7603305785123967,
"acc_stderr": 0.03896878985070417,
"acc_norm": 0.7603305785123967,
"acc_norm_stderr": 0.03896878985070417
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7592592592592593,
"acc_stderr": 0.041331194402438376,
"acc_norm": 0.7592592592592593,
"acc_norm_stderr": 0.041331194402438376
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7423312883435583,
"acc_stderr": 0.03436150827846917,
"acc_norm": 0.7423312883435583,
"acc_norm_stderr": 0.03436150827846917
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.38392857142857145,
"acc_stderr": 0.04616143075028547,
"acc_norm": 0.38392857142857145,
"acc_norm_stderr": 0.04616143075028547
},
"harness|hendrycksTest-management|5": {
"acc": 0.7669902912621359,
"acc_stderr": 0.04185832598928315,
"acc_norm": 0.7669902912621359,
"acc_norm_stderr": 0.04185832598928315
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8675213675213675,
"acc_stderr": 0.022209309073165616,
"acc_norm": 0.8675213675213675,
"acc_norm_stderr": 0.022209309073165616
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8109833971902938,
"acc_stderr": 0.014000791294407003,
"acc_norm": 0.8109833971902938,
"acc_norm_stderr": 0.014000791294407003
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7023121387283237,
"acc_stderr": 0.024617055388676992,
"acc_norm": 0.7023121387283237,
"acc_norm_stderr": 0.024617055388676992
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.3396648044692737,
"acc_stderr": 0.015839400406212505,
"acc_norm": 0.3396648044692737,
"acc_norm_stderr": 0.015839400406212505
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7091503267973857,
"acc_stderr": 0.02600480036395213,
"acc_norm": 0.7091503267973857,
"acc_norm_stderr": 0.02600480036395213
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.707395498392283,
"acc_stderr": 0.02583989833487798,
"acc_norm": 0.707395498392283,
"acc_norm_stderr": 0.02583989833487798
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7283950617283951,
"acc_stderr": 0.02474862449053738,
"acc_norm": 0.7283950617283951,
"acc_norm_stderr": 0.02474862449053738
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.46808510638297873,
"acc_stderr": 0.029766675075873866,
"acc_norm": 0.46808510638297873,
"acc_norm_stderr": 0.029766675075873866
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.439374185136897,
"acc_stderr": 0.012676014778580214,
"acc_norm": 0.439374185136897,
"acc_norm_stderr": 0.012676014778580214
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6764705882352942,
"acc_stderr": 0.028418208619406755,
"acc_norm": 0.6764705882352942,
"acc_norm_stderr": 0.028418208619406755
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6339869281045751,
"acc_stderr": 0.019488025745529675,
"acc_norm": 0.6339869281045751,
"acc_norm_stderr": 0.019488025745529675
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6454545454545455,
"acc_stderr": 0.045820048415054174,
"acc_norm": 0.6454545454545455,
"acc_norm_stderr": 0.045820048415054174
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7142857142857143,
"acc_stderr": 0.0289205832206756,
"acc_norm": 0.7142857142857143,
"acc_norm_stderr": 0.0289205832206756
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8308457711442786,
"acc_stderr": 0.026508590656233264,
"acc_norm": 0.8308457711442786,
"acc_norm_stderr": 0.026508590656233264
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.82,
"acc_stderr": 0.038612291966536955,
"acc_norm": 0.82,
"acc_norm_stderr": 0.038612291966536955
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5240963855421686,
"acc_stderr": 0.03887971849597264,
"acc_norm": 0.5240963855421686,
"acc_norm_stderr": 0.03887971849597264
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8128654970760234,
"acc_stderr": 0.029913127232368036,
"acc_norm": 0.8128654970760234,
"acc_norm_stderr": 0.029913127232368036
},
"harness|truthfulqa:mc|0": {
"mc1": 0.38922888616891066,
"mc1_stderr": 0.017068552680690328,
"mc2": 0.5527536910345616,
"mc2_stderr": 0.015269414074864143
},
"harness|winogrande|5": {
"acc": 0.7892659826361483,
"acc_stderr": 0.011462046419710681
},
"harness|gsm8k|5": {
"acc": 0.30401819560272936,
"acc_stderr": 0.012670420440198654
}
}
```
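As a quick illustration of how to work with this record, a per-category average can be recomputed from it; a minimal sketch, assuming the JSON above has been parsed into a Python dict named `results` (e.g. via `json.loads`):

```python
# Unweighted mean of the 5-shot MMLU ("hendrycksTest") subject accuracies
# taken from the record above; `results` is assumed to be the parsed dict.
mmlu_accs = [
    scores["acc"]
    for task, scores in results.items()
    if task.startswith("harness|hendrycksTest-")
]
print(f"MMLU average over {len(mmlu_accs)} subjects: "
      f"{sum(mmlu_accs) / len(mmlu_accs):.4f}")
```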
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_BlouseJury__Mistral-7B-Discord-0.1-DPO | [
"region:us"
] | 2024-02-01T17:02:37+00:00 | {"pretty_name": "Evaluation run of BlouseJury/Mistral-7B-Discord-0.1-DPO", "dataset_summary": "Dataset automatically created during the evaluation run of model [BlouseJury/Mistral-7B-Discord-0.1-DPO](https://huggingface.co/BlouseJury/Mistral-7B-Discord-0.1-DPO) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_BlouseJury__Mistral-7B-Discord-0.1-DPO\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-02-01T17:00:23.691484](https://huggingface.co/datasets/open-llm-leaderboard/details_BlouseJury__Mistral-7B-Discord-0.1-DPO/blob/main/results_2024-02-01T17-00-23.691484.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6233951979610222,\n \"acc_stderr\": 0.032739541113838276,\n \"acc_norm\": 0.6297936917040108,\n \"acc_norm_stderr\": 0.033418584464628434,\n \"mc1\": 0.38922888616891066,\n \"mc1_stderr\": 0.017068552680690328,\n \"mc2\": 0.5527536910345616,\n \"mc2_stderr\": 0.015269414074864143\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6015358361774744,\n \"acc_stderr\": 0.014306946052735563,\n \"acc_norm\": 0.6322525597269625,\n \"acc_norm_stderr\": 0.014090995618168484\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6394144592710616,\n \"acc_stderr\": 0.004791890625834196,\n \"acc_norm\": 0.8327026488747261,\n \"acc_norm_stderr\": 0.003724783389253327\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.29,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6074074074074074,\n \"acc_stderr\": 0.0421850621536888,\n \"acc_norm\": 0.6074074074074074,\n \"acc_norm_stderr\": 0.0421850621536888\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6513157894736842,\n \"acc_stderr\": 0.0387813988879761,\n \"acc_norm\": 0.6513157894736842,\n \"acc_norm_stderr\": 0.0387813988879761\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.56,\n \"acc_stderr\": 0.04988876515698589,\n \"acc_norm\": 0.56,\n \"acc_norm_stderr\": 0.04988876515698589\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6830188679245283,\n \"acc_stderr\": 0.02863723563980089,\n \"acc_norm\": 0.6830188679245283,\n \"acc_norm_stderr\": 0.02863723563980089\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7152777777777778,\n \"acc_stderr\": 0.037738099906869334,\n \"acc_norm\": 0.7152777777777778,\n \"acc_norm_stderr\": 0.037738099906869334\n },\n 
\"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.050251890762960605,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.050251890762960605\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.49,\n \"acc_stderr\": 0.05024183937956912,\n \"acc_norm\": 0.49,\n \"acc_norm_stderr\": 0.05024183937956912\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.42,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.42,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5953757225433526,\n \"acc_stderr\": 0.03742461193887249,\n \"acc_norm\": 0.5953757225433526,\n \"acc_norm_stderr\": 0.03742461193887249\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.4117647058823529,\n \"acc_stderr\": 0.04897104952726366,\n \"acc_norm\": 0.4117647058823529,\n \"acc_norm_stderr\": 0.04897104952726366\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.76,\n \"acc_stderr\": 0.04292346959909283,\n \"acc_norm\": 0.76,\n \"acc_norm_stderr\": 0.04292346959909283\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5787234042553191,\n \"acc_stderr\": 0.03227834510146268,\n \"acc_norm\": 0.5787234042553191,\n \"acc_norm_stderr\": 0.03227834510146268\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4473684210526316,\n \"acc_stderr\": 0.04677473004491199,\n \"acc_norm\": 0.4473684210526316,\n \"acc_norm_stderr\": 0.04677473004491199\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5586206896551724,\n \"acc_stderr\": 0.04137931034482758,\n \"acc_norm\": 0.5586206896551724,\n \"acc_norm_stderr\": 0.04137931034482758\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.41798941798941797,\n \"acc_stderr\": 0.025402555503260912,\n \"acc_norm\": 0.41798941798941797,\n \"acc_norm_stderr\": 0.025402555503260912\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4444444444444444,\n \"acc_stderr\": 0.04444444444444449,\n \"acc_norm\": 0.4444444444444444,\n \"acc_norm_stderr\": 0.04444444444444449\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.37,\n \"acc_stderr\": 0.048523658709391,\n \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.048523658709391\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7548387096774194,\n \"acc_stderr\": 0.02447224384089553,\n \"acc_norm\": 0.7548387096774194,\n \"acc_norm_stderr\": 0.02447224384089553\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5024630541871922,\n \"acc_stderr\": 0.035179450386910616,\n \"acc_norm\": 0.5024630541871922,\n \"acc_norm_stderr\": 0.035179450386910616\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.66,\n \"acc_stderr\": 0.04760952285695237,\n \"acc_norm\": 0.66,\n \"acc_norm_stderr\": 0.04760952285695237\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7393939393939394,\n \"acc_stderr\": 0.034277431758165236,\n \"acc_norm\": 0.7393939393939394,\n \"acc_norm_stderr\": 0.034277431758165236\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.803030303030303,\n \"acc_stderr\": 0.028335609732463362,\n \"acc_norm\": 0.803030303030303,\n \"acc_norm_stderr\": 0.028335609732463362\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8601036269430051,\n \"acc_stderr\": 0.02503387058301518,\n \"acc_norm\": 0.8601036269430051,\n \"acc_norm_stderr\": 
0.02503387058301518\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6333333333333333,\n \"acc_stderr\": 0.02443301646605246,\n \"acc_norm\": 0.6333333333333333,\n \"acc_norm_stderr\": 0.02443301646605246\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3074074074074074,\n \"acc_stderr\": 0.028133252578815642,\n \"acc_norm\": 0.3074074074074074,\n \"acc_norm_stderr\": 0.028133252578815642\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6470588235294118,\n \"acc_stderr\": 0.031041941304059278,\n \"acc_norm\": 0.6470588235294118,\n \"acc_norm_stderr\": 0.031041941304059278\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3576158940397351,\n \"acc_stderr\": 0.03913453431177258,\n \"acc_norm\": 0.3576158940397351,\n \"acc_norm_stderr\": 0.03913453431177258\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8165137614678899,\n \"acc_stderr\": 0.016595259710399327,\n \"acc_norm\": 0.8165137614678899,\n \"acc_norm_stderr\": 0.016595259710399327\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5509259259259259,\n \"acc_stderr\": 0.03392238405321617,\n \"acc_norm\": 0.5509259259259259,\n \"acc_norm_stderr\": 0.03392238405321617\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8088235294117647,\n \"acc_stderr\": 0.02759917430064076,\n \"acc_norm\": 0.8088235294117647,\n \"acc_norm_stderr\": 0.02759917430064076\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7848101265822784,\n \"acc_stderr\": 0.026750826994676173,\n \"acc_norm\": 0.7848101265822784,\n \"acc_norm_stderr\": 0.026750826994676173\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6681614349775785,\n \"acc_stderr\": 0.03160295143776678,\n \"acc_norm\": 0.6681614349775785,\n \"acc_norm_stderr\": 0.03160295143776678\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7633587786259542,\n \"acc_stderr\": 0.03727673575596913,\n \"acc_norm\": 0.7633587786259542,\n \"acc_norm_stderr\": 0.03727673575596913\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7603305785123967,\n \"acc_stderr\": 0.03896878985070417,\n \"acc_norm\": 0.7603305785123967,\n \"acc_norm_stderr\": 0.03896878985070417\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7592592592592593,\n \"acc_stderr\": 0.041331194402438376,\n \"acc_norm\": 0.7592592592592593,\n \"acc_norm_stderr\": 0.041331194402438376\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7423312883435583,\n \"acc_stderr\": 0.03436150827846917,\n \"acc_norm\": 0.7423312883435583,\n \"acc_norm_stderr\": 0.03436150827846917\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.38392857142857145,\n \"acc_stderr\": 0.04616143075028547,\n \"acc_norm\": 0.38392857142857145,\n \"acc_norm_stderr\": 0.04616143075028547\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7669902912621359,\n \"acc_stderr\": 0.04185832598928315,\n \"acc_norm\": 0.7669902912621359,\n \"acc_norm_stderr\": 0.04185832598928315\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8675213675213675,\n \"acc_stderr\": 0.022209309073165616,\n \"acc_norm\": 0.8675213675213675,\n \"acc_norm_stderr\": 0.022209309073165616\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n 
\"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8109833971902938,\n \"acc_stderr\": 0.014000791294407003,\n \"acc_norm\": 0.8109833971902938,\n \"acc_norm_stderr\": 0.014000791294407003\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7023121387283237,\n \"acc_stderr\": 0.024617055388676992,\n \"acc_norm\": 0.7023121387283237,\n \"acc_norm_stderr\": 0.024617055388676992\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3396648044692737,\n \"acc_stderr\": 0.015839400406212505,\n \"acc_norm\": 0.3396648044692737,\n \"acc_norm_stderr\": 0.015839400406212505\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7091503267973857,\n \"acc_stderr\": 0.02600480036395213,\n \"acc_norm\": 0.7091503267973857,\n \"acc_norm_stderr\": 0.02600480036395213\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.707395498392283,\n \"acc_stderr\": 0.02583989833487798,\n \"acc_norm\": 0.707395498392283,\n \"acc_norm_stderr\": 0.02583989833487798\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7283950617283951,\n \"acc_stderr\": 0.02474862449053738,\n \"acc_norm\": 0.7283950617283951,\n \"acc_norm_stderr\": 0.02474862449053738\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.46808510638297873,\n \"acc_stderr\": 0.029766675075873866,\n \"acc_norm\": 0.46808510638297873,\n \"acc_norm_stderr\": 0.029766675075873866\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.439374185136897,\n \"acc_stderr\": 0.012676014778580214,\n \"acc_norm\": 0.439374185136897,\n \"acc_norm_stderr\": 0.012676014778580214\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6764705882352942,\n \"acc_stderr\": 0.028418208619406755,\n \"acc_norm\": 0.6764705882352942,\n \"acc_norm_stderr\": 0.028418208619406755\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6339869281045751,\n \"acc_stderr\": 0.019488025745529675,\n \"acc_norm\": 0.6339869281045751,\n \"acc_norm_stderr\": 0.019488025745529675\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6454545454545455,\n \"acc_stderr\": 0.045820048415054174,\n \"acc_norm\": 0.6454545454545455,\n \"acc_norm_stderr\": 0.045820048415054174\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7142857142857143,\n \"acc_stderr\": 0.0289205832206756,\n \"acc_norm\": 0.7142857142857143,\n \"acc_norm_stderr\": 0.0289205832206756\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8308457711442786,\n \"acc_stderr\": 0.026508590656233264,\n \"acc_norm\": 0.8308457711442786,\n \"acc_norm_stderr\": 0.026508590656233264\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.82,\n \"acc_stderr\": 0.038612291966536955,\n \"acc_norm\": 0.82,\n \"acc_norm_stderr\": 0.038612291966536955\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5240963855421686,\n \"acc_stderr\": 0.03887971849597264,\n \"acc_norm\": 0.5240963855421686,\n \"acc_norm_stderr\": 0.03887971849597264\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8128654970760234,\n \"acc_stderr\": 0.029913127232368036,\n \"acc_norm\": 0.8128654970760234,\n \"acc_norm_stderr\": 0.029913127232368036\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.38922888616891066,\n \"mc1_stderr\": 0.017068552680690328,\n \"mc2\": 0.5527536910345616,\n \"mc2_stderr\": 0.015269414074864143\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7892659826361483,\n \"acc_stderr\": 0.011462046419710681\n },\n \"harness|gsm8k|5\": {\n \"acc\": 
0.30401819560272936,\n \"acc_stderr\": 0.012670420440198654\n }\n}\n```", "repo_url": "https://huggingface.co/BlouseJury/Mistral-7B-Discord-0.1-DPO", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|arc:challenge|25_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|gsm8k|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|hellaswag|10_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T17-00-23.691484.parquet", 
"**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-management|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-virology|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-02-01T17-00-23.691484.parquet", 
"**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-management|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T17-00-23.691484.parquet", 
"**/details_harness|hendrycksTest-nutrition|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-virology|5_2024-02-01T17-00-23.691484.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", 
"data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": 
["**/details_harness|hendrycksTest-global_facts|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": 
"2024_02_01T17_00_23.691484", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|hendrycksTest-management|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T17-00-23.691484.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|hendrycksTest-virology|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|truthfulqa:mc|0_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["**/details_harness|winogrande|5_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": 
["**/details_harness|winogrande|5_2024-02-01T17-00-23.691484.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_02_01T17_00_23.691484", "path": ["results_2024-02-01T17-00-23.691484.parquet"]}, {"split": "latest", "path": ["results_2024-02-01T17-00-23.691484.parquet"]}]}]} | 2024-02-01T17:03:07+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of BlouseJury/Mistral-7B-Discord-0.1-DPO
Dataset automatically created during the evaluation run of model BlouseJury/Mistral-7B-Discord-0.1-DPO on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
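A minimal sketch (assuming the leaderboard's usual `details_<org>__<model>` repository naming and one of the task configurations listed in this repo's metadata, such as `harness_winogrande_5`):

```python
from datasets import load_dataset

# Load the per-sample details for a single evaluation task; the "train"
# split always points to the latest run.
data = load_dataset("open-llm-leaderboard/details_BlouseJury__Mistral-7B-Discord-0.1-DPO",
	"harness_winogrande_5",
	split="train")
```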
## Latest results
These are the latest results from run 2024-02-01T17:00:23.691484 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Evaluation run of BlouseJury/Mistral-7B-Discord-0.1-DPO\n\n\n\nDataset automatically created during the evaluation run of model BlouseJury/Mistral-7B-Discord-0.1-DPO on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-02-01T17:00:23.691484(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of BlouseJury/Mistral-7B-Discord-0.1-DPO\n\n\n\nDataset automatically created during the evaluation run of model BlouseJury/Mistral-7B-Discord-0.1-DPO on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-02-01T17:00:23.691484(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] |
da4016ff539dc9caa0cffcbd55e7af34ee7cda2b | # Dataset Card for "snips_test_valid_synth"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Codec-SUPERB/snips_test_valid_synth | [
"region:us"
] | 2024-02-01T17:06:54+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "original", "path": "data/original-*"}, {"split": "academicodec_hifi_16k_320d", "path": "data/academicodec_hifi_16k_320d-*"}, {"split": "academicodec_hifi_16k_320d_large_uni", "path": "data/academicodec_hifi_16k_320d_large_uni-*"}, {"split": "academicodec_hifi_24k_320d", "path": "data/academicodec_hifi_24k_320d-*"}, {"split": "audiodec_24k_320d", "path": "data/audiodec_24k_320d-*"}, {"split": "dac_16k", "path": "data/dac_16k-*"}, {"split": "dac_24k", "path": "data/dac_24k-*"}, {"split": "dac_44k", "path": "data/dac_44k-*"}, {"split": "encodec_24k_12bps", "path": "data/encodec_24k_12bps-*"}, {"split": "encodec_24k_1_5bps", "path": "data/encodec_24k_1_5bps-*"}, {"split": "encodec_24k_24bps", "path": "data/encodec_24k_24bps-*"}, {"split": "encodec_24k_3bps", "path": "data/encodec_24k_3bps-*"}, {"split": "encodec_24k_6bps", "path": "data/encodec_24k_6bps-*"}, {"split": "funcodec_en_libritts_16k_gr1nq32ds320", "path": "data/funcodec_en_libritts_16k_gr1nq32ds320-*"}, {"split": "funcodec_en_libritts_16k_gr8nq32ds320", "path": "data/funcodec_en_libritts_16k_gr8nq32ds320-*"}, {"split": "funcodec_en_libritts_16k_nq32ds320", "path": "data/funcodec_en_libritts_16k_nq32ds320-*"}, {"split": "funcodec_en_libritts_16k_nq32ds640", "path": "data/funcodec_en_libritts_16k_nq32ds640-*"}, {"split": "funcodec_zh_en_16k_nq32ds320", "path": "data/funcodec_zh_en_16k_nq32ds320-*"}, {"split": "funcodec_zh_en_16k_nq32ds640", "path": "data/funcodec_zh_en_16k_nq32ds640-*"}, {"split": "speech_tokenizer_16k", "path": "data/speech_tokenizer_16k-*"}]}], "dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "id", "dtype": "string"}], "splits": [{"name": "original", "num_bytes": 2082708964.0, "num_examples": 22400}, {"name": "academicodec_hifi_16k_320d", "num_bytes": 2076522528.0, "num_examples": 22400}, {"name": "academicodec_hifi_16k_320d_large_uni", "num_bytes": 2076522528.0, "num_examples": 22400}, {"name": "academicodec_hifi_24k_320d", "num_bytes": 3116940448.0, "num_examples": 22400}, {"name": "audiodec_24k_320d", "num_bytes": 3130702048.0, "num_examples": 22400}, {"name": "dac_16k", "num_bytes": 2083538330.0, "num_examples": 22400}, {"name": "dac_24k", "num_bytes": 3372733027.2, "num_examples": 22400}, {"name": "dac_44k", "num_bytes": 6195191491.2, "num_examples": 22400}, {"name": "encodec_24k_12bps", "num_bytes": 3372733027.2, "num_examples": 22400}, {"name": "encodec_24k_1_5bps", "num_bytes": 3372733027.2, "num_examples": 22400}, {"name": "encodec_24k_24bps", "num_bytes": 3372733027.2, "num_examples": 22400}, {"name": "encodec_24k_3bps", "num_bytes": 3372733027.2, "num_examples": 22400}, {"name": "encodec_24k_6bps", "num_bytes": 3372733027.2, "num_examples": 22400}, {"name": "funcodec_en_libritts_16k_gr1nq32ds320", "num_bytes": 2247635459.2, "num_examples": 22400}, {"name": "funcodec_en_libritts_16k_gr8nq32ds320", "num_bytes": 2247635459.2, "num_examples": 22400}, {"name": "funcodec_en_libritts_16k_nq32ds320", "num_bytes": 2249360080.0, "num_examples": 22400}, {"name": "funcodec_en_libritts_16k_nq32ds640", "num_bytes": 2249360080.0, "num_examples": 22400}, {"name": "funcodec_zh_en_16k_nq32ds320", "num_bytes": 2249360080.0, "num_examples": 22400}, {"name": "funcodec_zh_en_16k_nq32ds640", "num_bytes": 2249360080.0, "num_examples": 22400}, {"name": "speech_tokenizer_16k", "num_bytes": 2256298256.0, "num_examples": 22400}], "download_size": 53388830469, "dataset_size": 
56747533994.799995}} | 2024-02-01T18:28:06+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "snips_test_valid_synth"
More Information needed | [
"# Dataset Card for \"snips_test_valid_synth\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"snips_test_valid_synth\"\n\nMore Information needed"
] |
282f685e168c0160212a324b34e2392e505783cb |
# Dataset Card for Evaluation run of FelixChao/WestSeverus-10.7B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [FelixChao/WestSeverus-10.7B](https://huggingface.co/FelixChao/WestSeverus-10.7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_FelixChao__WestSeverus-10.7B",
"harness_winogrande_5",
split="train")
```
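To work with the aggregated scores instead of the per-sample details, a short sketch (assuming the `results` configuration and its `latest` split follow the same convention as the other leaderboard details repositories):

```python
from datasets import load_dataset

# The "results" configuration stores the aggregated metrics of the run;
# the "latest" split points to the most recent evaluation.
results = load_dataset("open-llm-leaderboard/details_FelixChao__WestSeverus-10.7B",
	"results",
	split="latest")
print(results[0])  # one row of aggregated scores
```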
## Latest results
These are the [latest results from run 2024-02-01T17:05:37.081553](https://huggingface.co/datasets/open-llm-leaderboard/details_FelixChao__WestSeverus-10.7B/blob/main/results_2024-02-01T17-05-37.081553.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6541756570563236,
"acc_stderr": 0.032145025906151356,
"acc_norm": 0.6555840410982634,
"acc_norm_stderr": 0.0327981150369556,
"mc1": 0.554467564259486,
"mc1_stderr": 0.017399335280140343,
"mc2": 0.7230180462238235,
"mc2_stderr": 0.01453935209482768
},
"harness|arc:challenge|25": {
"acc": 0.6885665529010239,
"acc_stderr": 0.013532472099850947,
"acc_norm": 0.7218430034129693,
"acc_norm_stderr": 0.013094469919538805
},
"harness|hellaswag|10": {
"acc": 0.68442541326429,
"acc_stderr": 0.004637944965914612,
"acc_norm": 0.874726150169289,
"acc_norm_stderr": 0.003303526413123495
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.35,
"acc_stderr": 0.047937248544110196,
"acc_norm": 0.35,
"acc_norm_stderr": 0.047937248544110196
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6518518518518519,
"acc_stderr": 0.041153246103369526,
"acc_norm": 0.6518518518518519,
"acc_norm_stderr": 0.041153246103369526
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7105263157894737,
"acc_stderr": 0.03690677986137283,
"acc_norm": 0.7105263157894737,
"acc_norm_stderr": 0.03690677986137283
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.62,
"acc_stderr": 0.048783173121456316,
"acc_norm": 0.62,
"acc_norm_stderr": 0.048783173121456316
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6981132075471698,
"acc_stderr": 0.02825420034443865,
"acc_norm": 0.6981132075471698,
"acc_norm_stderr": 0.02825420034443865
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.03476590104304134,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.03476590104304134
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.5,
"acc_stderr": 0.050251890762960605,
"acc_norm": 0.5,
"acc_norm_stderr": 0.050251890762960605
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.57,
"acc_stderr": 0.04975698519562428,
"acc_norm": 0.57,
"acc_norm_stderr": 0.04975698519562428
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252604,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252604
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6589595375722543,
"acc_stderr": 0.036146654241808254,
"acc_norm": 0.6589595375722543,
"acc_norm_stderr": 0.036146654241808254
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.45098039215686275,
"acc_stderr": 0.049512182523962625,
"acc_norm": 0.45098039215686275,
"acc_norm_stderr": 0.049512182523962625
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.76,
"acc_stderr": 0.04292346959909283,
"acc_norm": 0.76,
"acc_norm_stderr": 0.04292346959909283
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.6,
"acc_stderr": 0.03202563076101735,
"acc_norm": 0.6,
"acc_norm_stderr": 0.03202563076101735
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4473684210526316,
"acc_stderr": 0.04677473004491199,
"acc_norm": 0.4473684210526316,
"acc_norm_stderr": 0.04677473004491199
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5862068965517241,
"acc_stderr": 0.04104269211806232,
"acc_norm": 0.5862068965517241,
"acc_norm_stderr": 0.04104269211806232
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.43386243386243384,
"acc_stderr": 0.025525034382474887,
"acc_norm": 0.43386243386243384,
"acc_norm_stderr": 0.025525034382474887
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.49206349206349204,
"acc_stderr": 0.044715725362943486,
"acc_norm": 0.49206349206349204,
"acc_norm_stderr": 0.044715725362943486
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.35,
"acc_stderr": 0.047937248544110196,
"acc_norm": 0.35,
"acc_norm_stderr": 0.047937248544110196
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7806451612903226,
"acc_stderr": 0.023540799358723292,
"acc_norm": 0.7806451612903226,
"acc_norm_stderr": 0.023540799358723292
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.49261083743842365,
"acc_stderr": 0.035176035403610084,
"acc_norm": 0.49261083743842365,
"acc_norm_stderr": 0.035176035403610084
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7696969696969697,
"acc_stderr": 0.0328766675860349,
"acc_norm": 0.7696969696969697,
"acc_norm_stderr": 0.0328766675860349
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7676767676767676,
"acc_stderr": 0.03008862949021749,
"acc_norm": 0.7676767676767676,
"acc_norm_stderr": 0.03008862949021749
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8808290155440415,
"acc_stderr": 0.023381935348121427,
"acc_norm": 0.8808290155440415,
"acc_norm_stderr": 0.023381935348121427
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6743589743589744,
"acc_stderr": 0.02375966576741229,
"acc_norm": 0.6743589743589744,
"acc_norm_stderr": 0.02375966576741229
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.362962962962963,
"acc_stderr": 0.029318203645206858,
"acc_norm": 0.362962962962963,
"acc_norm_stderr": 0.029318203645206858
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.680672268907563,
"acc_stderr": 0.030283995525884396,
"acc_norm": 0.680672268907563,
"acc_norm_stderr": 0.030283995525884396
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3576158940397351,
"acc_stderr": 0.03913453431177258,
"acc_norm": 0.3576158940397351,
"acc_norm_stderr": 0.03913453431177258
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8477064220183487,
"acc_stderr": 0.015405084393157074,
"acc_norm": 0.8477064220183487,
"acc_norm_stderr": 0.015405084393157074
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.49074074074074076,
"acc_stderr": 0.034093869469927006,
"acc_norm": 0.49074074074074076,
"acc_norm_stderr": 0.034093869469927006
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8284313725490197,
"acc_stderr": 0.026460569561240644,
"acc_norm": 0.8284313725490197,
"acc_norm_stderr": 0.026460569561240644
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8016877637130801,
"acc_stderr": 0.025955020841621112,
"acc_norm": 0.8016877637130801,
"acc_norm_stderr": 0.025955020841621112
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6771300448430493,
"acc_stderr": 0.031381476375754995,
"acc_norm": 0.6771300448430493,
"acc_norm_stderr": 0.031381476375754995
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7786259541984732,
"acc_stderr": 0.03641297081313728,
"acc_norm": 0.7786259541984732,
"acc_norm_stderr": 0.03641297081313728
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7851239669421488,
"acc_stderr": 0.037494924487096966,
"acc_norm": 0.7851239669421488,
"acc_norm_stderr": 0.037494924487096966
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.0401910747255735,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.0401910747255735
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7423312883435583,
"acc_stderr": 0.03436150827846917,
"acc_norm": 0.7423312883435583,
"acc_norm_stderr": 0.03436150827846917
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5089285714285714,
"acc_stderr": 0.04745033255489123,
"acc_norm": 0.5089285714285714,
"acc_norm_stderr": 0.04745033255489123
},
"harness|hendrycksTest-management|5": {
"acc": 0.7864077669902912,
"acc_stderr": 0.040580420156460344,
"acc_norm": 0.7864077669902912,
"acc_norm_stderr": 0.040580420156460344
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8803418803418803,
"acc_stderr": 0.021262719400406974,
"acc_norm": 0.8803418803418803,
"acc_norm_stderr": 0.021262719400406974
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.76,
"acc_stderr": 0.042923469599092816,
"acc_norm": 0.76,
"acc_norm_stderr": 0.042923469599092816
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8199233716475096,
"acc_stderr": 0.013740797258579825,
"acc_norm": 0.8199233716475096,
"acc_norm_stderr": 0.013740797258579825
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7167630057803468,
"acc_stderr": 0.024257901705323385,
"acc_norm": 0.7167630057803468,
"acc_norm_stderr": 0.024257901705323385
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.40670391061452515,
"acc_stderr": 0.01642881191589886,
"acc_norm": 0.40670391061452515,
"acc_norm_stderr": 0.01642881191589886
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7320261437908496,
"acc_stderr": 0.025360603796242557,
"acc_norm": 0.7320261437908496,
"acc_norm_stderr": 0.025360603796242557
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.729903536977492,
"acc_stderr": 0.02521804037341063,
"acc_norm": 0.729903536977492,
"acc_norm_stderr": 0.02521804037341063
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7438271604938271,
"acc_stderr": 0.0242885336377261,
"acc_norm": 0.7438271604938271,
"acc_norm_stderr": 0.0242885336377261
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.46808510638297873,
"acc_stderr": 0.029766675075873866,
"acc_norm": 0.46808510638297873,
"acc_norm_stderr": 0.029766675075873866
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4576271186440678,
"acc_stderr": 0.012724296550980188,
"acc_norm": 0.4576271186440678,
"acc_norm_stderr": 0.012724296550980188
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6985294117647058,
"acc_stderr": 0.027875982114273168,
"acc_norm": 0.6985294117647058,
"acc_norm_stderr": 0.027875982114273168
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6781045751633987,
"acc_stderr": 0.018901015322093092,
"acc_norm": 0.6781045751633987,
"acc_norm_stderr": 0.018901015322093092
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7,
"acc_stderr": 0.04389311454644287,
"acc_norm": 0.7,
"acc_norm_stderr": 0.04389311454644287
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7183673469387755,
"acc_stderr": 0.028795185574291293,
"acc_norm": 0.7183673469387755,
"acc_norm_stderr": 0.028795185574291293
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8308457711442786,
"acc_stderr": 0.026508590656233278,
"acc_norm": 0.8308457711442786,
"acc_norm_stderr": 0.026508590656233278
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.85,
"acc_stderr": 0.0358870281282637,
"acc_norm": 0.85,
"acc_norm_stderr": 0.0358870281282637
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5542168674698795,
"acc_stderr": 0.03869543323472101,
"acc_norm": 0.5542168674698795,
"acc_norm_stderr": 0.03869543323472101
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.847953216374269,
"acc_stderr": 0.027539122889061456,
"acc_norm": 0.847953216374269,
"acc_norm_stderr": 0.027539122889061456
},
"harness|truthfulqa:mc|0": {
"mc1": 0.554467564259486,
"mc1_stderr": 0.017399335280140343,
"mc2": 0.7230180462238235,
"mc2_stderr": 0.01453935209482768
},
"harness|winogrande|5": {
"acc": 0.8271507498026835,
"acc_stderr": 0.010626964529971868
},
"harness|gsm8k|5": {
"acc": 0.621683093252464,
"acc_stderr": 0.013358407831777112
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_FelixChao__WestSeverus-10.7B | [
"region:us"
# Dataset Card for Evaluation run of NovoCode/Phi-2-DPO
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [NovoCode/Phi-2-DPO](https://huggingface.co/NovoCode/Phi-2-DPO) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_NovoCode__Phi-2-DPO",
"harness_winogrande_5",
split="train")
```
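Each configuration follows the split convention described above, so you can also inspect what is available before loading anything. A minimal sketch (the helper calls are standard `datasets` APIs; the config and split names are taken from the description above):

```python
from datasets import get_dataset_split_names, load_dataset

repo = "open-llm-leaderboard/details_NovoCode__Phi-2-DPO"

# Each task config has one split per run timestamp, plus a "latest" alias
print(get_dataset_split_names(repo, "harness_winogrande_5"))

# The aggregated metrics live in the "results" config; its "latest" split
# always points to the most recent evaluation run
results = load_dataset(repo, "results", split="latest")
print(results)
```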
## Latest results
These are the [latest results from run 2024-02-01T17:08:08.454430](https://huggingface.co/datasets/open-llm-leaderboard/details_NovoCode__Phi-2-DPO/blob/main/results_2024-02-01T17-08-08.454430.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.5795587539871195,
"acc_stderr": 0.033777624922631505,
"acc_norm": 0.5809349484156788,
"acc_norm_stderr": 0.034467505401666106,
"mc1": 0.30599755201958384,
"mc1_stderr": 0.01613222972815504,
"mc2": 0.4445615233459152,
"mc2_stderr": 0.015100517041010023
},
"harness|arc:challenge|25": {
"acc": 0.5784982935153583,
"acc_stderr": 0.014430197069326023,
"acc_norm": 0.6075085324232082,
"acc_norm_stderr": 0.014269634635670714
},
"harness|hellaswag|10": {
"acc": 0.5621390161322446,
"acc_stderr": 0.004951097802775951,
"acc_norm": 0.7503485361481776,
"acc_norm_stderr": 0.004319267432460672
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.29,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.29,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.42962962962962964,
"acc_stderr": 0.04276349494376599,
"acc_norm": 0.42962962962962964,
"acc_norm_stderr": 0.04276349494376599
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.5855263157894737,
"acc_stderr": 0.040089737857792046,
"acc_norm": 0.5855263157894737,
"acc_norm_stderr": 0.040089737857792046
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.58,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.58,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.5886792452830188,
"acc_stderr": 0.03028500925900979,
"acc_norm": 0.5886792452830188,
"acc_norm_stderr": 0.03028500925900979
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.6736111111111112,
"acc_stderr": 0.03921067198982266,
"acc_norm": 0.6736111111111112,
"acc_norm_stderr": 0.03921067198982266
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.39,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.39,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.39,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.39,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.41,
"acc_stderr": 0.04943110704237101,
"acc_norm": 0.41,
"acc_norm_stderr": 0.04943110704237101
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.5953757225433526,
"acc_stderr": 0.03742461193887248,
"acc_norm": 0.5953757225433526,
"acc_norm_stderr": 0.03742461193887248
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.35294117647058826,
"acc_stderr": 0.047551296160629475,
"acc_norm": 0.35294117647058826,
"acc_norm_stderr": 0.047551296160629475
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.74,
"acc_stderr": 0.04408440022768078,
"acc_norm": 0.74,
"acc_norm_stderr": 0.04408440022768078
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5106382978723404,
"acc_stderr": 0.03267862331014063,
"acc_norm": 0.5106382978723404,
"acc_norm_stderr": 0.03267862331014063
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.38596491228070173,
"acc_stderr": 0.04579639422070434,
"acc_norm": 0.38596491228070173,
"acc_norm_stderr": 0.04579639422070434
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5586206896551724,
"acc_stderr": 0.04137931034482757,
"acc_norm": 0.5586206896551724,
"acc_norm_stderr": 0.04137931034482757
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.42857142857142855,
"acc_stderr": 0.025487187147859372,
"acc_norm": 0.42857142857142855,
"acc_norm_stderr": 0.025487187147859372
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.38095238095238093,
"acc_stderr": 0.04343525428949098,
"acc_norm": 0.38095238095238093,
"acc_norm_stderr": 0.04343525428949098
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.37,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.37,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7,
"acc_stderr": 0.026069362295335134,
"acc_norm": 0.7,
"acc_norm_stderr": 0.026069362295335134
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.47783251231527096,
"acc_stderr": 0.035145285621750094,
"acc_norm": 0.47783251231527096,
"acc_norm_stderr": 0.035145285621750094
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.63,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.63,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.6424242424242425,
"acc_stderr": 0.03742597043806586,
"acc_norm": 0.6424242424242425,
"acc_norm_stderr": 0.03742597043806586
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7323232323232324,
"acc_stderr": 0.03154449888270285,
"acc_norm": 0.7323232323232324,
"acc_norm_stderr": 0.03154449888270285
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8082901554404145,
"acc_stderr": 0.028408953626245282,
"acc_norm": 0.8082901554404145,
"acc_norm_stderr": 0.028408953626245282
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.5794871794871795,
"acc_stderr": 0.025028610276710855,
"acc_norm": 0.5794871794871795,
"acc_norm_stderr": 0.025028610276710855
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.32592592592592595,
"acc_stderr": 0.02857834836547307,
"acc_norm": 0.32592592592592595,
"acc_norm_stderr": 0.02857834836547307
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6134453781512605,
"acc_stderr": 0.03163145807552378,
"acc_norm": 0.6134453781512605,
"acc_norm_stderr": 0.03163145807552378
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.37748344370860926,
"acc_stderr": 0.0395802723112157,
"acc_norm": 0.37748344370860926,
"acc_norm_stderr": 0.0395802723112157
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.7944954128440367,
"acc_stderr": 0.017324352325016015,
"acc_norm": 0.7944954128440367,
"acc_norm_stderr": 0.017324352325016015
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.46296296296296297,
"acc_stderr": 0.03400603625538272,
"acc_norm": 0.46296296296296297,
"acc_norm_stderr": 0.03400603625538272
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.6568627450980392,
"acc_stderr": 0.033321399446680854,
"acc_norm": 0.6568627450980392,
"acc_norm_stderr": 0.033321399446680854
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7341772151898734,
"acc_stderr": 0.028756799629658342,
"acc_norm": 0.7341772151898734,
"acc_norm_stderr": 0.028756799629658342
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6502242152466368,
"acc_stderr": 0.03200736719484503,
"acc_norm": 0.6502242152466368,
"acc_norm_stderr": 0.03200736719484503
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7175572519083969,
"acc_stderr": 0.03948406125768361,
"acc_norm": 0.7175572519083969,
"acc_norm_stderr": 0.03948406125768361
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.71900826446281,
"acc_stderr": 0.04103203830514511,
"acc_norm": 0.71900826446281,
"acc_norm_stderr": 0.04103203830514511
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7314814814814815,
"acc_stderr": 0.042844679680521934,
"acc_norm": 0.7314814814814815,
"acc_norm_stderr": 0.042844679680521934
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7423312883435583,
"acc_stderr": 0.03436150827846917,
"acc_norm": 0.7423312883435583,
"acc_norm_stderr": 0.03436150827846917
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5267857142857143,
"acc_stderr": 0.047389751192741546,
"acc_norm": 0.5267857142857143,
"acc_norm_stderr": 0.047389751192741546
},
"harness|hendrycksTest-management|5": {
"acc": 0.7087378640776699,
"acc_stderr": 0.044986763205729224,
"acc_norm": 0.7087378640776699,
"acc_norm_stderr": 0.044986763205729224
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.811965811965812,
"acc_stderr": 0.025598193686652268,
"acc_norm": 0.811965811965812,
"acc_norm_stderr": 0.025598193686652268
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.63,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.63,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.6909323116219668,
"acc_stderr": 0.016524988919702204,
"acc_norm": 0.6909323116219668,
"acc_norm_stderr": 0.016524988919702204
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.684971098265896,
"acc_stderr": 0.025009313790069716,
"acc_norm": 0.684971098265896,
"acc_norm_stderr": 0.025009313790069716
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.2782122905027933,
"acc_stderr": 0.014987325439963546,
"acc_norm": 0.2782122905027933,
"acc_norm_stderr": 0.014987325439963546
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6274509803921569,
"acc_stderr": 0.027684181883302895,
"acc_norm": 0.6274509803921569,
"acc_norm_stderr": 0.027684181883302895
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6270096463022508,
"acc_stderr": 0.027466610213140105,
"acc_norm": 0.6270096463022508,
"acc_norm_stderr": 0.027466610213140105
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6049382716049383,
"acc_stderr": 0.02720111766692565,
"acc_norm": 0.6049382716049383,
"acc_norm_stderr": 0.02720111766692565
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4219858156028369,
"acc_stderr": 0.029462189233370604,
"acc_norm": 0.4219858156028369,
"acc_norm_stderr": 0.029462189233370604
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.42242503259452413,
"acc_stderr": 0.012615600475734921,
"acc_norm": 0.42242503259452413,
"acc_norm_stderr": 0.012615600475734921
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.5036764705882353,
"acc_stderr": 0.030372015885428195,
"acc_norm": 0.5036764705882353,
"acc_norm_stderr": 0.030372015885428195
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.5620915032679739,
"acc_stderr": 0.020071257886886525,
"acc_norm": 0.5620915032679739,
"acc_norm_stderr": 0.020071257886886525
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6181818181818182,
"acc_stderr": 0.046534298079135075,
"acc_norm": 0.6181818181818182,
"acc_norm_stderr": 0.046534298079135075
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7142857142857143,
"acc_stderr": 0.028920583220675602,
"acc_norm": 0.7142857142857143,
"acc_norm_stderr": 0.028920583220675602
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.7910447761194029,
"acc_stderr": 0.028748298931728655,
"acc_norm": 0.7910447761194029,
"acc_norm_stderr": 0.028748298931728655
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.77,
"acc_stderr": 0.042295258468165065,
"acc_norm": 0.77,
"acc_norm_stderr": 0.042295258468165065
},
"harness|hendrycksTest-virology|5": {
"acc": 0.46987951807228917,
"acc_stderr": 0.03885425420866767,
"acc_norm": 0.46987951807228917,
"acc_norm_stderr": 0.03885425420866767
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.695906432748538,
"acc_stderr": 0.03528211258245231,
"acc_norm": 0.695906432748538,
"acc_norm_stderr": 0.03528211258245231
},
"harness|truthfulqa:mc|0": {
"mc1": 0.30599755201958384,
"mc1_stderr": 0.01613222972815504,
"mc2": 0.4445615233459152,
"mc2_stderr": 0.015100517041010023
},
"harness|winogrande|5": {
"acc": 0.7363851617995264,
"acc_stderr": 0.012382849299658466
},
"harness|gsm8k|5": {
"acc": 0.558756633813495,
"acc_stderr": 0.013677059478592645
}
}
```
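Every per-task entry above follows the same shape: a `harness|<task>|<n_shots>` key mapping to `acc`/`acc_stderr` (and, for most tasks, `acc_norm`/`acc_norm_stderr`) values, which makes the file easy to post-process. A minimal sketch, assuming the JSON above has been read into a string `results_json` (a hypothetical variable; in practice the file is the one linked from the "Latest results" section):
```python
import json

# results_json is a hypothetical string holding the JSON document shown above.
scores = json.loads(results_json)

# Average the MMLU ("hendrycksTest") subtask accuracies as an illustration.
mmlu = [v["acc"] for k, v in scores.items() if k.startswith("harness|hendrycksTest-")]
print(f"{len(mmlu)} MMLU subtasks, mean acc = {sum(mmlu) / len(mmlu):.4f}")
```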
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
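No structure description has been filled in yet, but the configurations themselves can be enumerated with the `datasets` library. A minimal sketch (the expected counts follow the description earlier in this card, not a verified output):
```python
from datasets import get_dataset_config_names

# One configuration per evaluated task, plus the aggregated "results" config.
configs = get_dataset_config_names("open-llm-leaderboard/details_NovoCode__Phi-2-DPO")
print(len(configs))   # expected: 63 task configurations + "results"
print(configs[:5])    # harness-style names such as "harness_winogrande_5"
```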
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed]
"region:us"
] | 2024-02-01T17:09:53+00:00 | {"pretty_name": "Evaluation run of NovoCode/Phi-2-DPO", "dataset_summary": "Dataset automatically created during the evaluation run of model [NovoCode/Phi-2-DPO](https://huggingface.co/NovoCode/Phi-2-DPO) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_NovoCode__Phi-2-DPO\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-02-01T17:08:08.454430](https://huggingface.co/datasets/open-llm-leaderboard/details_NovoCode__Phi-2-DPO/blob/main/results_2024-02-01T17-08-08.454430.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5795587539871195,\n \"acc_stderr\": 0.033777624922631505,\n \"acc_norm\": 0.5809349484156788,\n \"acc_norm_stderr\": 0.034467505401666106,\n \"mc1\": 0.30599755201958384,\n \"mc1_stderr\": 0.01613222972815504,\n \"mc2\": 0.4445615233459152,\n \"mc2_stderr\": 0.015100517041010023\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5784982935153583,\n \"acc_stderr\": 0.014430197069326023,\n \"acc_norm\": 0.6075085324232082,\n \"acc_norm_stderr\": 0.014269634635670714\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5621390161322446,\n \"acc_stderr\": 0.004951097802775951,\n \"acc_norm\": 0.7503485361481776,\n \"acc_norm_stderr\": 0.004319267432460672\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.29,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.42962962962962964,\n \"acc_stderr\": 0.04276349494376599,\n \"acc_norm\": 0.42962962962962964,\n \"acc_norm_stderr\": 0.04276349494376599\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.5855263157894737,\n \"acc_stderr\": 0.040089737857792046,\n \"acc_norm\": 0.5855263157894737,\n \"acc_norm_stderr\": 0.040089737857792046\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.58,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.58,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.5886792452830188,\n \"acc_stderr\": 0.03028500925900979,\n \"acc_norm\": 0.5886792452830188,\n \"acc_norm_stderr\": 0.03028500925900979\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6736111111111112,\n \"acc_stderr\": 0.03921067198982266,\n \"acc_norm\": 0.6736111111111112,\n \"acc_norm_stderr\": 0.03921067198982266\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.39,\n \"acc_stderr\": 
0.04902071300001975,\n \"acc_norm\": 0.39,\n \"acc_norm_stderr\": 0.04902071300001975\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.39,\n \"acc_stderr\": 0.04902071300001975,\n \"acc_norm\": 0.39,\n \"acc_norm_stderr\": 0.04902071300001975\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.41,\n \"acc_stderr\": 0.04943110704237101,\n \"acc_norm\": 0.41,\n \"acc_norm_stderr\": 0.04943110704237101\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5953757225433526,\n \"acc_stderr\": 0.03742461193887248,\n \"acc_norm\": 0.5953757225433526,\n \"acc_norm_stderr\": 0.03742461193887248\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.35294117647058826,\n \"acc_stderr\": 0.047551296160629475,\n \"acc_norm\": 0.35294117647058826,\n \"acc_norm_stderr\": 0.047551296160629475\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.74,\n \"acc_stderr\": 0.04408440022768078,\n \"acc_norm\": 0.74,\n \"acc_norm_stderr\": 0.04408440022768078\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5106382978723404,\n \"acc_stderr\": 0.03267862331014063,\n \"acc_norm\": 0.5106382978723404,\n \"acc_norm_stderr\": 0.03267862331014063\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.38596491228070173,\n \"acc_stderr\": 0.04579639422070434,\n \"acc_norm\": 0.38596491228070173,\n \"acc_norm_stderr\": 0.04579639422070434\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5586206896551724,\n \"acc_stderr\": 0.04137931034482757,\n \"acc_norm\": 0.5586206896551724,\n \"acc_norm_stderr\": 0.04137931034482757\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.42857142857142855,\n \"acc_stderr\": 0.025487187147859372,\n \"acc_norm\": 0.42857142857142855,\n \"acc_norm_stderr\": 0.025487187147859372\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.38095238095238093,\n \"acc_stderr\": 0.04343525428949098,\n \"acc_norm\": 0.38095238095238093,\n \"acc_norm_stderr\": 0.04343525428949098\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.37,\n \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7,\n \"acc_stderr\": 0.026069362295335134,\n \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.026069362295335134\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.47783251231527096,\n \"acc_stderr\": 0.035145285621750094,\n \"acc_norm\": 0.47783251231527096,\n \"acc_norm_stderr\": 0.035145285621750094\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.63,\n \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\": 0.63,\n \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.6424242424242425,\n \"acc_stderr\": 0.03742597043806586,\n \"acc_norm\": 0.6424242424242425,\n \"acc_norm_stderr\": 0.03742597043806586\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7323232323232324,\n \"acc_stderr\": 0.03154449888270285,\n \"acc_norm\": 0.7323232323232324,\n \"acc_norm_stderr\": 0.03154449888270285\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8082901554404145,\n \"acc_stderr\": 0.028408953626245282,\n \"acc_norm\": 0.8082901554404145,\n \"acc_norm_stderr\": 0.028408953626245282\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 
0.5794871794871795,\n \"acc_stderr\": 0.025028610276710855,\n \"acc_norm\": 0.5794871794871795,\n \"acc_norm_stderr\": 0.025028610276710855\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.32592592592592595,\n \"acc_stderr\": 0.02857834836547307,\n \"acc_norm\": 0.32592592592592595,\n \"acc_norm_stderr\": 0.02857834836547307\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6134453781512605,\n \"acc_stderr\": 0.03163145807552378,\n \"acc_norm\": 0.6134453781512605,\n \"acc_norm_stderr\": 0.03163145807552378\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.37748344370860926,\n \"acc_stderr\": 0.0395802723112157,\n \"acc_norm\": 0.37748344370860926,\n \"acc_norm_stderr\": 0.0395802723112157\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.7944954128440367,\n \"acc_stderr\": 0.017324352325016015,\n \"acc_norm\": 0.7944954128440367,\n \"acc_norm_stderr\": 0.017324352325016015\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.46296296296296297,\n \"acc_stderr\": 0.03400603625538272,\n \"acc_norm\": 0.46296296296296297,\n \"acc_norm_stderr\": 0.03400603625538272\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.6568627450980392,\n \"acc_stderr\": 0.033321399446680854,\n \"acc_norm\": 0.6568627450980392,\n \"acc_norm_stderr\": 0.033321399446680854\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7341772151898734,\n \"acc_stderr\": 0.028756799629658342,\n \"acc_norm\": 0.7341772151898734,\n \"acc_norm_stderr\": 0.028756799629658342\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6502242152466368,\n \"acc_stderr\": 0.03200736719484503,\n \"acc_norm\": 0.6502242152466368,\n \"acc_norm_stderr\": 0.03200736719484503\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7175572519083969,\n \"acc_stderr\": 0.03948406125768361,\n \"acc_norm\": 0.7175572519083969,\n \"acc_norm_stderr\": 0.03948406125768361\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.71900826446281,\n \"acc_stderr\": 0.04103203830514511,\n \"acc_norm\": 0.71900826446281,\n \"acc_norm_stderr\": 0.04103203830514511\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7314814814814815,\n \"acc_stderr\": 0.042844679680521934,\n \"acc_norm\": 0.7314814814814815,\n \"acc_norm_stderr\": 0.042844679680521934\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7423312883435583,\n \"acc_stderr\": 0.03436150827846917,\n \"acc_norm\": 0.7423312883435583,\n \"acc_norm_stderr\": 0.03436150827846917\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5267857142857143,\n \"acc_stderr\": 0.047389751192741546,\n \"acc_norm\": 0.5267857142857143,\n \"acc_norm_stderr\": 0.047389751192741546\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7087378640776699,\n \"acc_stderr\": 0.044986763205729224,\n \"acc_norm\": 0.7087378640776699,\n \"acc_norm_stderr\": 0.044986763205729224\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.811965811965812,\n \"acc_stderr\": 0.025598193686652268,\n \"acc_norm\": 0.811965811965812,\n \"acc_norm_stderr\": 0.025598193686652268\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.63,\n \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\": 0.63,\n \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.6909323116219668,\n \"acc_stderr\": 0.016524988919702204,\n \"acc_norm\": 
0.6909323116219668,\n \"acc_norm_stderr\": 0.016524988919702204\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.684971098265896,\n \"acc_stderr\": 0.025009313790069716,\n \"acc_norm\": 0.684971098265896,\n \"acc_norm_stderr\": 0.025009313790069716\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2782122905027933,\n \"acc_stderr\": 0.014987325439963546,\n \"acc_norm\": 0.2782122905027933,\n \"acc_norm_stderr\": 0.014987325439963546\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.6274509803921569,\n \"acc_stderr\": 0.027684181883302895,\n \"acc_norm\": 0.6274509803921569,\n \"acc_norm_stderr\": 0.027684181883302895\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6270096463022508,\n \"acc_stderr\": 0.027466610213140105,\n \"acc_norm\": 0.6270096463022508,\n \"acc_norm_stderr\": 0.027466610213140105\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.6049382716049383,\n \"acc_stderr\": 0.02720111766692565,\n \"acc_norm\": 0.6049382716049383,\n \"acc_norm_stderr\": 0.02720111766692565\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4219858156028369,\n \"acc_stderr\": 0.029462189233370604,\n \"acc_norm\": 0.4219858156028369,\n \"acc_norm_stderr\": 0.029462189233370604\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.42242503259452413,\n \"acc_stderr\": 0.012615600475734921,\n \"acc_norm\": 0.42242503259452413,\n \"acc_norm_stderr\": 0.012615600475734921\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.5036764705882353,\n \"acc_stderr\": 0.030372015885428195,\n \"acc_norm\": 0.5036764705882353,\n \"acc_norm_stderr\": 0.030372015885428195\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.5620915032679739,\n \"acc_stderr\": 0.020071257886886525,\n \"acc_norm\": 0.5620915032679739,\n \"acc_norm_stderr\": 0.020071257886886525\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6181818181818182,\n \"acc_stderr\": 0.046534298079135075,\n \"acc_norm\": 0.6181818181818182,\n \"acc_norm_stderr\": 0.046534298079135075\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7142857142857143,\n \"acc_stderr\": 0.028920583220675602,\n \"acc_norm\": 0.7142857142857143,\n \"acc_norm_stderr\": 0.028920583220675602\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7910447761194029,\n \"acc_stderr\": 0.028748298931728655,\n \"acc_norm\": 0.7910447761194029,\n \"acc_norm_stderr\": 0.028748298931728655\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.77,\n \"acc_stderr\": 0.042295258468165065,\n \"acc_norm\": 0.77,\n \"acc_norm_stderr\": 0.042295258468165065\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.46987951807228917,\n \"acc_stderr\": 0.03885425420866767,\n \"acc_norm\": 0.46987951807228917,\n \"acc_norm_stderr\": 0.03885425420866767\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.695906432748538,\n \"acc_stderr\": 0.03528211258245231,\n \"acc_norm\": 0.695906432748538,\n \"acc_norm_stderr\": 0.03528211258245231\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.30599755201958384,\n \"mc1_stderr\": 0.01613222972815504,\n \"mc2\": 0.4445615233459152,\n \"mc2_stderr\": 0.015100517041010023\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7363851617995264,\n \"acc_stderr\": 0.012382849299658466\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.558756633813495,\n \"acc_stderr\": 0.013677059478592645\n }\n}\n```", "repo_url": 
"https://huggingface.co/NovoCode/Phi-2-DPO", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|arc:challenge|25_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|gsm8k|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|hellaswag|10_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T17-08-08.454430.parquet", 
"**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-management|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-virology|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T17-08-08.454430.parquet", 
"**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-management|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-02-01T17-08-08.454430.parquet", 
"**/details_harness|hendrycksTest-philosophy|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-virology|5_2024-02-01T17-08-08.454430.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": 
["**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-global_facts|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T17-08-08.454430.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|hendrycksTest-management|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|hendrycksTest-virology|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|truthfulqa:mc|0_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_02_01T17_08_08.454430", "path": ["**/details_harness|winogrande|5_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-02-01T17-08-08.454430.parquet"]}]}, {"config_name": "results", "data_files": [{"split": 
"2024_02_01T17_08_08.454430", "path": ["results_2024-02-01T17-08-08.454430.parquet"]}, {"split": "latest", "path": ["results_2024-02-01T17-08-08.454430.parquet"]}]}]} | 2024-02-01T17:10:18+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of NovoCode/Phi-2-DPO
Dataset automatically created during the evaluation run of model NovoCode/Phi-2-DPO on the Open LLM Leaderboard.
The dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2024-02-01T17:08:08.454430(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval):
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Evaluation run of NovoCode/Phi-2-DPO\n\n\n\nDataset automatically created during the evaluation run of model NovoCode/Phi-2-DPO on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-02-01T17:08:08.454430(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of NovoCode/Phi-2-DPO\n\n\n\nDataset automatically created during the evaluation run of model NovoCode/Phi-2-DPO on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-02-01T17:08:08.454430(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] |
55d6863cf8df0a8794bf3d902e2694c46b00de45 |
# Dataset Card for Evaluation run of yanolja/KoSOLAR-10.7B-v0.2
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [yanolja/KoSOLAR-10.7B-v0.2](https://huggingface.co/yanolja/KoSOLAR-10.7B-v0.2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_yanolja__KoSOLAR-10.7B-v0.2",
"harness_winogrande_5",
split="train")
```
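The aggregated results live in their own configuration. As a minimal sketch (using the "results" configuration and "latest" split described above), they can be loaded the same way:

```python
from datasets import load_dataset

# The "results" configuration aggregates all task metrics for this model;
# the "latest" split always points at the most recent evaluation run.
results = load_dataset(
    "open-llm-leaderboard/details_yanolja__KoSOLAR-10.7B-v0.2",
    "results",
    split="latest",
)
```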
## Latest results
These are the [latest results from run 2024-02-01T17:12:29.578851](https://huggingface.co/datasets/open-llm-leaderboard/details_yanolja__KoSOLAR-10.7B-v0.2/blob/main/results_2024-02-01T17-12-29.578851.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each of them in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6465773677032779,
"acc_stderr": 0.0317513460109556,
"acc_norm": 0.6508843032597929,
"acc_norm_stderr": 0.03238458854956254,
"mc1": 0.3268053855569155,
"mc1_stderr": 0.016419874731135032,
"mc2": 0.4794002097808169,
"mc2_stderr": 0.015071913407180176
},
"harness|arc:challenge|25": {
"acc": 0.5597269624573379,
"acc_stderr": 0.014506769524804241,
"acc_norm": 0.613481228668942,
"acc_norm_stderr": 0.014230084761910474
},
"harness|hellaswag|10": {
"acc": 0.634833698466441,
"acc_stderr": 0.0048049276087731236,
"acc_norm": 0.8263294164509062,
"acc_norm_stderr": 0.0037805175193024905
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.27,
"acc_stderr": 0.04461960433384741,
"acc_norm": 0.27,
"acc_norm_stderr": 0.04461960433384741
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5555555555555556,
"acc_stderr": 0.04292596718256981,
"acc_norm": 0.5555555555555556,
"acc_norm_stderr": 0.04292596718256981
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7302631578947368,
"acc_stderr": 0.03611780560284898,
"acc_norm": 0.7302631578947368,
"acc_norm_stderr": 0.03611780560284898
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6943396226415094,
"acc_stderr": 0.028353298073322663,
"acc_norm": 0.6943396226415094,
"acc_norm_stderr": 0.028353298073322663
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7569444444444444,
"acc_stderr": 0.0358687928008034,
"acc_norm": 0.7569444444444444,
"acc_norm_stderr": 0.0358687928008034
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.44,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.44,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.54,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.54,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.36,
"acc_stderr": 0.048241815132442176,
"acc_norm": 0.36,
"acc_norm_stderr": 0.048241815132442176
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6127167630057804,
"acc_stderr": 0.03714325906302064,
"acc_norm": 0.6127167630057804,
"acc_norm_stderr": 0.03714325906302064
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.3431372549019608,
"acc_stderr": 0.047240073523838876,
"acc_norm": 0.3431372549019608,
"acc_norm_stderr": 0.047240073523838876
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.76,
"acc_stderr": 0.04292346959909281,
"acc_norm": 0.76,
"acc_norm_stderr": 0.04292346959909281
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5787234042553191,
"acc_stderr": 0.03227834510146268,
"acc_norm": 0.5787234042553191,
"acc_norm_stderr": 0.03227834510146268
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5,
"acc_stderr": 0.047036043419179864,
"acc_norm": 0.5,
"acc_norm_stderr": 0.047036043419179864
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5862068965517241,
"acc_stderr": 0.04104269211806232,
"acc_norm": 0.5862068965517241,
"acc_norm_stderr": 0.04104269211806232
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4576719576719577,
"acc_stderr": 0.02565886886205834,
"acc_norm": 0.4576719576719577,
"acc_norm_stderr": 0.02565886886205834
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4444444444444444,
"acc_stderr": 0.044444444444444495,
"acc_norm": 0.4444444444444444,
"acc_norm_stderr": 0.044444444444444495
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.38,
"acc_stderr": 0.04878317312145632,
"acc_norm": 0.38,
"acc_norm_stderr": 0.04878317312145632
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7774193548387097,
"acc_stderr": 0.02366421667164251,
"acc_norm": 0.7774193548387097,
"acc_norm_stderr": 0.02366421667164251
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.49261083743842365,
"acc_stderr": 0.03517603540361008,
"acc_norm": 0.49261083743842365,
"acc_norm_stderr": 0.03517603540361008
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.793939393939394,
"acc_stderr": 0.03158415324047709,
"acc_norm": 0.793939393939394,
"acc_norm_stderr": 0.03158415324047709
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8282828282828283,
"acc_stderr": 0.02686971618742991,
"acc_norm": 0.8282828282828283,
"acc_norm_stderr": 0.02686971618742991
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8963730569948186,
"acc_stderr": 0.02199531196364424,
"acc_norm": 0.8963730569948186,
"acc_norm_stderr": 0.02199531196364424
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6230769230769231,
"acc_stderr": 0.024570975364225995,
"acc_norm": 0.6230769230769231,
"acc_norm_stderr": 0.024570975364225995
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3296296296296296,
"acc_stderr": 0.02866120111652458,
"acc_norm": 0.3296296296296296,
"acc_norm_stderr": 0.02866120111652458
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6596638655462185,
"acc_stderr": 0.030778057422931673,
"acc_norm": 0.6596638655462185,
"acc_norm_stderr": 0.030778057422931673
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.36423841059602646,
"acc_stderr": 0.03929111781242741,
"acc_norm": 0.36423841059602646,
"acc_norm_stderr": 0.03929111781242741
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8366972477064221,
"acc_stderr": 0.01584825580650155,
"acc_norm": 0.8366972477064221,
"acc_norm_stderr": 0.01584825580650155
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.6018518518518519,
"acc_stderr": 0.033384734032074016,
"acc_norm": 0.6018518518518519,
"acc_norm_stderr": 0.033384734032074016
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8431372549019608,
"acc_stderr": 0.025524722324553346,
"acc_norm": 0.8431372549019608,
"acc_norm_stderr": 0.025524722324553346
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8438818565400844,
"acc_stderr": 0.023627159460318674,
"acc_norm": 0.8438818565400844,
"acc_norm_stderr": 0.023627159460318674
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.726457399103139,
"acc_stderr": 0.029918586707798824,
"acc_norm": 0.726457399103139,
"acc_norm_stderr": 0.029918586707798824
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7709923664122137,
"acc_stderr": 0.036853466317118506,
"acc_norm": 0.7709923664122137,
"acc_norm_stderr": 0.036853466317118506
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8181818181818182,
"acc_stderr": 0.03520893951097654,
"acc_norm": 0.8181818181818182,
"acc_norm_stderr": 0.03520893951097654
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7962962962962963,
"acc_stderr": 0.03893542518824847,
"acc_norm": 0.7962962962962963,
"acc_norm_stderr": 0.03893542518824847
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7361963190184049,
"acc_stderr": 0.03462419931615623,
"acc_norm": 0.7361963190184049,
"acc_norm_stderr": 0.03462419931615623
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.44642857142857145,
"acc_stderr": 0.04718471485219588,
"acc_norm": 0.44642857142857145,
"acc_norm_stderr": 0.04718471485219588
},
"harness|hendrycksTest-management|5": {
"acc": 0.8058252427184466,
"acc_stderr": 0.03916667762822584,
"acc_norm": 0.8058252427184466,
"acc_norm_stderr": 0.03916667762822584
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8803418803418803,
"acc_stderr": 0.021262719400406964,
"acc_norm": 0.8803418803418803,
"acc_norm_stderr": 0.021262719400406964
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.72,
"acc_stderr": 0.045126085985421276,
"acc_norm": 0.72,
"acc_norm_stderr": 0.045126085985421276
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8160919540229885,
"acc_stderr": 0.013853724170922526,
"acc_norm": 0.8160919540229885,
"acc_norm_stderr": 0.013853724170922526
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7254335260115607,
"acc_stderr": 0.02402774515526502,
"acc_norm": 0.7254335260115607,
"acc_norm_stderr": 0.02402774515526502
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.2569832402234637,
"acc_stderr": 0.014614465821966344,
"acc_norm": 0.2569832402234637,
"acc_norm_stderr": 0.014614465821966344
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7647058823529411,
"acc_stderr": 0.024288619466046095,
"acc_norm": 0.7647058823529411,
"acc_norm_stderr": 0.024288619466046095
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7106109324758842,
"acc_stderr": 0.02575586592263295,
"acc_norm": 0.7106109324758842,
"acc_norm_stderr": 0.02575586592263295
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7469135802469136,
"acc_stderr": 0.024191808600713002,
"acc_norm": 0.7469135802469136,
"acc_norm_stderr": 0.024191808600713002
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.5035460992907801,
"acc_stderr": 0.02982674915328092,
"acc_norm": 0.5035460992907801,
"acc_norm_stderr": 0.02982674915328092
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.48239895697522817,
"acc_stderr": 0.012762321298823641,
"acc_norm": 0.48239895697522817,
"acc_norm_stderr": 0.012762321298823641
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.7132352941176471,
"acc_stderr": 0.027472274473233818,
"acc_norm": 0.7132352941176471,
"acc_norm_stderr": 0.027472274473233818
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.684640522875817,
"acc_stderr": 0.018798086284886887,
"acc_norm": 0.684640522875817,
"acc_norm_stderr": 0.018798086284886887
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6727272727272727,
"acc_stderr": 0.0449429086625209,
"acc_norm": 0.6727272727272727,
"acc_norm_stderr": 0.0449429086625209
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.763265306122449,
"acc_stderr": 0.027212835884073153,
"acc_norm": 0.763265306122449,
"acc_norm_stderr": 0.027212835884073153
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8507462686567164,
"acc_stderr": 0.02519692987482708,
"acc_norm": 0.8507462686567164,
"acc_norm_stderr": 0.02519692987482708
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.93,
"acc_stderr": 0.025643239997624294,
"acc_norm": 0.93,
"acc_norm_stderr": 0.025643239997624294
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5542168674698795,
"acc_stderr": 0.038695433234721015,
"acc_norm": 0.5542168674698795,
"acc_norm_stderr": 0.038695433234721015
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.7953216374269005,
"acc_stderr": 0.030944459778533207,
"acc_norm": 0.7953216374269005,
"acc_norm_stderr": 0.030944459778533207
},
"harness|truthfulqa:mc|0": {
"mc1": 0.3268053855569155,
"mc1_stderr": 0.016419874731135032,
"mc2": 0.4794002097808169,
"mc2_stderr": 0.015071913407180176
},
"harness|winogrande|5": {
"acc": 0.8074191002367798,
"acc_stderr": 0.01108253884749191
},
"harness|gsm8k|5": {
"acc": 0.47687642153146326,
"acc_stderr": 0.013757748544245323
}
}
```
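As a minimal sketch, assuming the JSON block above has been parsed into a Python dict named `results`, the macro-average accuracy over the MMLU (`hendrycksTest`) subtasks can be recomputed as follows. This unweighted mean is only an illustration; the official leaderboard aggregation may differ.

```python
# Minimal sketch: `results` is assumed to hold the dict printed above.
mmlu = {
    task: metrics
    for task, metrics in results.items()
    if task.startswith("harness|hendrycksTest")
}

# Unweighted (macro) average of the per-subtask accuracies.
mmlu_acc = sum(metrics["acc"] for metrics in mmlu.values()) / len(mmlu)
print(f"MMLU macro-average accuracy over {len(mmlu)} subtasks: {mmlu_acc:.4f}")
```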
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
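As a sketch, the available configurations (one per evaluated task, plus "results") and their splits can be enumerated with the `datasets` helpers, assuming network access to the Hugging Face Hub:

```python
from datasets import get_dataset_config_names, get_dataset_split_names

repo = "open-llm-leaderboard/details_yanolja__KoSOLAR-10.7B-v0.2"

configs = get_dataset_config_names(repo)  # one per evaluated task, plus "results"
print(len(configs), "configurations")

# Each configuration has one timestamped split per run, plus "latest".
print(get_dataset_split_names(repo, "harness_winogrande_5"))
```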
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_yanolja__KoSOLAR-10.7B-v0.2 | [
"region:us"
] | 2024-02-01T17:14:45+00:00 | {"pretty_name": "Evaluation run of yanolja/KoSOLAR-10.7B-v0.2", "dataset_summary": "Dataset automatically created during the evaluation run of model [yanolja/KoSOLAR-10.7B-v0.2](https://huggingface.co/yanolja/KoSOLAR-10.7B-v0.2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_yanolja__KoSOLAR-10.7B-v0.2\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-02-01T17:12:29.578851](https://huggingface.co/datasets/open-llm-leaderboard/details_yanolja__KoSOLAR-10.7B-v0.2/blob/main/results_2024-02-01T17-12-29.578851.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6465773677032779,\n \"acc_stderr\": 0.0317513460109556,\n \"acc_norm\": 0.6508843032597929,\n \"acc_norm_stderr\": 0.03238458854956254,\n \"mc1\": 0.3268053855569155,\n \"mc1_stderr\": 0.016419874731135032,\n \"mc2\": 0.4794002097808169,\n \"mc2_stderr\": 0.015071913407180176\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5597269624573379,\n \"acc_stderr\": 0.014506769524804241,\n \"acc_norm\": 0.613481228668942,\n \"acc_norm_stderr\": 0.014230084761910474\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.634833698466441,\n \"acc_stderr\": 0.0048049276087731236,\n \"acc_norm\": 0.8263294164509062,\n \"acc_norm_stderr\": 0.0037805175193024905\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.27,\n \"acc_stderr\": 0.04461960433384741,\n \"acc_norm\": 0.27,\n \"acc_norm_stderr\": 0.04461960433384741\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5555555555555556,\n \"acc_stderr\": 0.04292596718256981,\n \"acc_norm\": 0.5555555555555556,\n \"acc_norm_stderr\": 0.04292596718256981\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.7302631578947368,\n \"acc_stderr\": 0.03611780560284898,\n \"acc_norm\": 0.7302631578947368,\n \"acc_norm_stderr\": 0.03611780560284898\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6943396226415094,\n \"acc_stderr\": 0.028353298073322663,\n \"acc_norm\": 0.6943396226415094,\n \"acc_norm_stderr\": 0.028353298073322663\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7569444444444444,\n \"acc_stderr\": 0.0358687928008034,\n \"acc_norm\": 0.7569444444444444,\n \"acc_norm_stderr\": 0.0358687928008034\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 
0.44,\n \"acc_stderr\": 0.04988876515698589,\n \"acc_norm\": 0.44,\n \"acc_norm_stderr\": 0.04988876515698589\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.54,\n \"acc_stderr\": 0.05009082659620332,\n \"acc_norm\": 0.54,\n \"acc_norm_stderr\": 0.05009082659620332\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.36,\n \"acc_stderr\": 0.048241815132442176,\n \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.048241815132442176\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6127167630057804,\n \"acc_stderr\": 0.03714325906302064,\n \"acc_norm\": 0.6127167630057804,\n \"acc_norm_stderr\": 0.03714325906302064\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.3431372549019608,\n \"acc_stderr\": 0.047240073523838876,\n \"acc_norm\": 0.3431372549019608,\n \"acc_norm_stderr\": 0.047240073523838876\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.76,\n \"acc_stderr\": 0.04292346959909281,\n \"acc_norm\": 0.76,\n \"acc_norm_stderr\": 0.04292346959909281\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5787234042553191,\n \"acc_stderr\": 0.03227834510146268,\n \"acc_norm\": 0.5787234042553191,\n \"acc_norm_stderr\": 0.03227834510146268\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.047036043419179864,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.047036043419179864\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5862068965517241,\n \"acc_stderr\": 0.04104269211806232,\n \"acc_norm\": 0.5862068965517241,\n \"acc_norm_stderr\": 0.04104269211806232\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.4576719576719577,\n \"acc_stderr\": 0.02565886886205834,\n \"acc_norm\": 0.4576719576719577,\n \"acc_norm_stderr\": 0.02565886886205834\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4444444444444444,\n \"acc_stderr\": 0.044444444444444495,\n \"acc_norm\": 0.4444444444444444,\n \"acc_norm_stderr\": 0.044444444444444495\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.38,\n \"acc_stderr\": 0.04878317312145632,\n \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.04878317312145632\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7774193548387097,\n \"acc_stderr\": 0.02366421667164251,\n \"acc_norm\": 0.7774193548387097,\n \"acc_norm_stderr\": 0.02366421667164251\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.49261083743842365,\n \"acc_stderr\": 0.03517603540361008,\n \"acc_norm\": 0.49261083743842365,\n \"acc_norm_stderr\": 0.03517603540361008\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.793939393939394,\n \"acc_stderr\": 0.03158415324047709,\n \"acc_norm\": 0.793939393939394,\n \"acc_norm_stderr\": 0.03158415324047709\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.8282828282828283,\n \"acc_stderr\": 0.02686971618742991,\n \"acc_norm\": 0.8282828282828283,\n \"acc_norm_stderr\": 0.02686971618742991\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8963730569948186,\n \"acc_stderr\": 0.02199531196364424,\n \"acc_norm\": 0.8963730569948186,\n \"acc_norm_stderr\": 0.02199531196364424\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n 
\"acc\": 0.6230769230769231,\n \"acc_stderr\": 0.024570975364225995,\n \"acc_norm\": 0.6230769230769231,\n \"acc_norm_stderr\": 0.024570975364225995\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3296296296296296,\n \"acc_stderr\": 0.02866120111652458,\n \"acc_norm\": 0.3296296296296296,\n \"acc_norm_stderr\": 0.02866120111652458\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6596638655462185,\n \"acc_stderr\": 0.030778057422931673,\n \"acc_norm\": 0.6596638655462185,\n \"acc_norm_stderr\": 0.030778057422931673\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.36423841059602646,\n \"acc_stderr\": 0.03929111781242741,\n \"acc_norm\": 0.36423841059602646,\n \"acc_norm_stderr\": 0.03929111781242741\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8366972477064221,\n \"acc_stderr\": 0.01584825580650155,\n \"acc_norm\": 0.8366972477064221,\n \"acc_norm_stderr\": 0.01584825580650155\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.6018518518518519,\n \"acc_stderr\": 0.033384734032074016,\n \"acc_norm\": 0.6018518518518519,\n \"acc_norm_stderr\": 0.033384734032074016\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8431372549019608,\n \"acc_stderr\": 0.025524722324553346,\n \"acc_norm\": 0.8431372549019608,\n \"acc_norm_stderr\": 0.025524722324553346\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.8438818565400844,\n \"acc_stderr\": 0.023627159460318674,\n \"acc_norm\": 0.8438818565400844,\n \"acc_norm_stderr\": 0.023627159460318674\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.726457399103139,\n \"acc_stderr\": 0.029918586707798824,\n \"acc_norm\": 0.726457399103139,\n \"acc_norm_stderr\": 0.029918586707798824\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7709923664122137,\n \"acc_stderr\": 0.036853466317118506,\n \"acc_norm\": 0.7709923664122137,\n \"acc_norm_stderr\": 0.036853466317118506\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8181818181818182,\n \"acc_stderr\": 0.03520893951097654,\n \"acc_norm\": 0.8181818181818182,\n \"acc_norm_stderr\": 0.03520893951097654\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7962962962962963,\n \"acc_stderr\": 0.03893542518824847,\n \"acc_norm\": 0.7962962962962963,\n \"acc_norm_stderr\": 0.03893542518824847\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7361963190184049,\n \"acc_stderr\": 0.03462419931615623,\n \"acc_norm\": 0.7361963190184049,\n \"acc_norm_stderr\": 0.03462419931615623\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.44642857142857145,\n \"acc_stderr\": 0.04718471485219588,\n \"acc_norm\": 0.44642857142857145,\n \"acc_norm_stderr\": 0.04718471485219588\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8058252427184466,\n \"acc_stderr\": 0.03916667762822584,\n \"acc_norm\": 0.8058252427184466,\n \"acc_norm_stderr\": 0.03916667762822584\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8803418803418803,\n \"acc_stderr\": 0.021262719400406964,\n \"acc_norm\": 0.8803418803418803,\n \"acc_norm_stderr\": 0.021262719400406964\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.72,\n \"acc_stderr\": 0.045126085985421276,\n \"acc_norm\": 0.72,\n \"acc_norm_stderr\": 0.045126085985421276\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8160919540229885,\n \"acc_stderr\": 0.013853724170922526,\n 
\"acc_norm\": 0.8160919540229885,\n \"acc_norm_stderr\": 0.013853724170922526\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7254335260115607,\n \"acc_stderr\": 0.02402774515526502,\n \"acc_norm\": 0.7254335260115607,\n \"acc_norm_stderr\": 0.02402774515526502\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2569832402234637,\n \"acc_stderr\": 0.014614465821966344,\n \"acc_norm\": 0.2569832402234637,\n \"acc_norm_stderr\": 0.014614465821966344\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7647058823529411,\n \"acc_stderr\": 0.024288619466046095,\n \"acc_norm\": 0.7647058823529411,\n \"acc_norm_stderr\": 0.024288619466046095\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7106109324758842,\n \"acc_stderr\": 0.02575586592263295,\n \"acc_norm\": 0.7106109324758842,\n \"acc_norm_stderr\": 0.02575586592263295\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7469135802469136,\n \"acc_stderr\": 0.024191808600713002,\n \"acc_norm\": 0.7469135802469136,\n \"acc_norm_stderr\": 0.024191808600713002\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.5035460992907801,\n \"acc_stderr\": 0.02982674915328092,\n \"acc_norm\": 0.5035460992907801,\n \"acc_norm_stderr\": 0.02982674915328092\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.48239895697522817,\n \"acc_stderr\": 0.012762321298823641,\n \"acc_norm\": 0.48239895697522817,\n \"acc_norm_stderr\": 0.012762321298823641\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.7132352941176471,\n \"acc_stderr\": 0.027472274473233818,\n \"acc_norm\": 0.7132352941176471,\n \"acc_norm_stderr\": 0.027472274473233818\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.684640522875817,\n \"acc_stderr\": 0.018798086284886887,\n \"acc_norm\": 0.684640522875817,\n \"acc_norm_stderr\": 0.018798086284886887\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6727272727272727,\n \"acc_stderr\": 0.0449429086625209,\n \"acc_norm\": 0.6727272727272727,\n \"acc_norm_stderr\": 0.0449429086625209\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.763265306122449,\n \"acc_stderr\": 0.027212835884073153,\n \"acc_norm\": 0.763265306122449,\n \"acc_norm_stderr\": 0.027212835884073153\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8507462686567164,\n \"acc_stderr\": 0.02519692987482708,\n \"acc_norm\": 0.8507462686567164,\n \"acc_norm_stderr\": 0.02519692987482708\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.93,\n \"acc_stderr\": 0.025643239997624294,\n \"acc_norm\": 0.93,\n \"acc_norm_stderr\": 0.025643239997624294\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5542168674698795,\n \"acc_stderr\": 0.038695433234721015,\n \"acc_norm\": 0.5542168674698795,\n \"acc_norm_stderr\": 0.038695433234721015\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.7953216374269005,\n \"acc_stderr\": 0.030944459778533207,\n \"acc_norm\": 0.7953216374269005,\n \"acc_norm_stderr\": 0.030944459778533207\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3268053855569155,\n \"mc1_stderr\": 0.016419874731135032,\n \"mc2\": 0.4794002097808169,\n \"mc2_stderr\": 0.015071913407180176\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8074191002367798,\n \"acc_stderr\": 0.01108253884749191\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.47687642153146326,\n \"acc_stderr\": 0.013757748544245323\n }\n}\n```", "repo_url": 
"https://huggingface.co/yanolja/KoSOLAR-10.7B-v0.2", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|arc:challenge|25_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|gsm8k|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|hellaswag|10_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T17-12-29.578851.parquet", 
"**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-management|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-virology|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T17-12-29.578851.parquet", 
"**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-management|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-02-01T17-12-29.578851.parquet", 
"**/details_harness|hendrycksTest-philosophy|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-virology|5_2024-02-01T17-12-29.578851.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": 
["**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-global_facts|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T17-12-29.578851.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|hendrycksTest-management|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|hendrycksTest-virology|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|truthfulqa:mc|0_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_02_01T17_12_29.578851", "path": ["**/details_harness|winogrande|5_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-02-01T17-12-29.578851.parquet"]}]}, {"config_name": "results", "data_files": [{"split": 
"2024_02_01T17_12_29.578851", "path": ["results_2024-02-01T17-12-29.578851.parquet"]}, {"split": "latest", "path": ["results_2024-02-01T17-12-29.578851.parquet"]}]}]} | 2024-02-01T17:15:15+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of yanolja/KoSOLAR-10.7B-v0.2
Dataset automatically created during the evaluation run of model yanolja/KoSOLAR-10.7B-v0.2 on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
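A minimal sketch (mirroring the loader snippet used for the other cards in this collection; the repository id follows the usual `details_<org>__<model>` naming, and `harness_winogrande_5` is one of the 63 configurations listed in this card's metadata):
```python
from datasets import load_dataset

data = load_dataset("open-llm-leaderboard/details_yanolja__KoSOLAR-10.7B-v0.2",
	"harness_winogrande_5",
	split="train")
```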
## Latest results
These are the latest results from run 2024-02-01T17:12:29.578851 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each of them in the results and in the "latest" split for each eval):
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
9ba7d8e31ac3a9477ea461b6726a1603c1f72725 |
# Dataset Card for Evaluation run of NovoCode/Metabird-7b-DPO
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [NovoCode/Metabird-7b-DPO](https://huggingface.co/NovoCode/Metabird-7b-DPO) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_NovoCode__Metabird-7b-DPO",
"harness_winogrande_5",
split="train")
```
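In the same way, the aggregated metrics can be pulled from the "results" configuration (a minimal sketch; both the config name and its "latest" split are declared in this card's metadata):
```python
from datasets import load_dataset

# The "results" config stores the aggregated run metrics; the "latest" split
# always points to the most recent timestamped run.
results = load_dataset("open-llm-leaderboard/details_NovoCode__Metabird-7b-DPO",
	"results",
	split="latest")
```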
## Latest results
These are the [latest results from run 2024-02-01T17:13:18.108149](https://huggingface.co/datasets/open-llm-leaderboard/details_NovoCode__Metabird-7b-DPO/blob/main/results_2024-02-01T17-13-18.108149.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each of them in the results and in the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6391904204460193,
"acc_stderr": 0.031966413772480606,
"acc_norm": 0.6485302281179317,
"acc_norm_stderr": 0.0326892053481526,
"mc1": 0.42717258261933905,
"mc1_stderr": 0.01731683441096393,
"mc2": 0.60296165179436,
"mc2_stderr": 0.015612165700803435
},
"harness|arc:challenge|25": {
"acc": 0.6305460750853242,
"acc_stderr": 0.014104578366491892,
"acc_norm": 0.659556313993174,
"acc_norm_stderr": 0.013847460518892976
},
"harness|hellaswag|10": {
"acc": 0.6872137024497113,
"acc_stderr": 0.004626805906522217,
"acc_norm": 0.8628759211312488,
"acc_norm_stderr": 0.0034327529819187957
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.32,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.32,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6,
"acc_stderr": 0.04232073695151589,
"acc_norm": 0.6,
"acc_norm_stderr": 0.04232073695151589
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7171052631578947,
"acc_stderr": 0.03665349695640767,
"acc_norm": 0.7171052631578947,
"acc_norm_stderr": 0.03665349695640767
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.62,
"acc_stderr": 0.048783173121456316,
"acc_norm": 0.62,
"acc_norm_stderr": 0.048783173121456316
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.720754716981132,
"acc_stderr": 0.027611163402399715,
"acc_norm": 0.720754716981132,
"acc_norm_stderr": 0.027611163402399715
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7638888888888888,
"acc_stderr": 0.03551446610810826,
"acc_norm": 0.7638888888888888,
"acc_norm_stderr": 0.03551446610810826
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.54,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.54,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.32,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.32,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6763005780346821,
"acc_stderr": 0.035676037996391706,
"acc_norm": 0.6763005780346821,
"acc_norm_stderr": 0.035676037996391706
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.3627450980392157,
"acc_stderr": 0.04784060704105654,
"acc_norm": 0.3627450980392157,
"acc_norm_stderr": 0.04784060704105654
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.77,
"acc_stderr": 0.04229525846816508,
"acc_norm": 0.77,
"acc_norm_stderr": 0.04229525846816508
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5914893617021276,
"acc_stderr": 0.032134180267015755,
"acc_norm": 0.5914893617021276,
"acc_norm_stderr": 0.032134180267015755
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5087719298245614,
"acc_stderr": 0.04702880432049615,
"acc_norm": 0.5087719298245614,
"acc_norm_stderr": 0.04702880432049615
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5379310344827586,
"acc_stderr": 0.04154659671707548,
"acc_norm": 0.5379310344827586,
"acc_norm_stderr": 0.04154659671707548
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.42328042328042326,
"acc_stderr": 0.025446365634406772,
"acc_norm": 0.42328042328042326,
"acc_norm_stderr": 0.025446365634406772
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.46825396825396826,
"acc_stderr": 0.04463112720677172,
"acc_norm": 0.46825396825396826,
"acc_norm_stderr": 0.04463112720677172
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7741935483870968,
"acc_stderr": 0.023785577884181012,
"acc_norm": 0.7741935483870968,
"acc_norm_stderr": 0.023785577884181012
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5073891625615764,
"acc_stderr": 0.035176035403610105,
"acc_norm": 0.5073891625615764,
"acc_norm_stderr": 0.035176035403610105
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7818181818181819,
"acc_stderr": 0.03225078108306289,
"acc_norm": 0.7818181818181819,
"acc_norm_stderr": 0.03225078108306289
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7929292929292929,
"acc_stderr": 0.028869778460267045,
"acc_norm": 0.7929292929292929,
"acc_norm_stderr": 0.028869778460267045
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9067357512953368,
"acc_stderr": 0.02098685459328974,
"acc_norm": 0.9067357512953368,
"acc_norm_stderr": 0.02098685459328974
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6564102564102564,
"acc_stderr": 0.024078696580635477,
"acc_norm": 0.6564102564102564,
"acc_norm_stderr": 0.024078696580635477
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.34444444444444444,
"acc_stderr": 0.02897264888484427,
"acc_norm": 0.34444444444444444,
"acc_norm_stderr": 0.02897264888484427
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.7100840336134454,
"acc_stderr": 0.029472485833136084,
"acc_norm": 0.7100840336134454,
"acc_norm_stderr": 0.029472485833136084
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.33774834437086093,
"acc_stderr": 0.038615575462551684,
"acc_norm": 0.33774834437086093,
"acc_norm_stderr": 0.038615575462551684
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8458715596330275,
"acc_stderr": 0.015480826865374303,
"acc_norm": 0.8458715596330275,
"acc_norm_stderr": 0.015480826865374303
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5416666666666666,
"acc_stderr": 0.03398110890294636,
"acc_norm": 0.5416666666666666,
"acc_norm_stderr": 0.03398110890294636
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8137254901960784,
"acc_stderr": 0.027325470966716312,
"acc_norm": 0.8137254901960784,
"acc_norm_stderr": 0.027325470966716312
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8312236286919831,
"acc_stderr": 0.024381406832586237,
"acc_norm": 0.8312236286919831,
"acc_norm_stderr": 0.024381406832586237
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6995515695067265,
"acc_stderr": 0.03076935200822914,
"acc_norm": 0.6995515695067265,
"acc_norm_stderr": 0.03076935200822914
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7709923664122137,
"acc_stderr": 0.036853466317118506,
"acc_norm": 0.7709923664122137,
"acc_norm_stderr": 0.036853466317118506
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.768595041322314,
"acc_stderr": 0.03849856098794088,
"acc_norm": 0.768595041322314,
"acc_norm_stderr": 0.03849856098794088
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.0401910747255735,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.0401910747255735
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7668711656441718,
"acc_stderr": 0.0332201579577674,
"acc_norm": 0.7668711656441718,
"acc_norm_stderr": 0.0332201579577674
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.44642857142857145,
"acc_stderr": 0.04718471485219588,
"acc_norm": 0.44642857142857145,
"acc_norm_stderr": 0.04718471485219588
},
"harness|hendrycksTest-management|5": {
"acc": 0.7864077669902912,
"acc_stderr": 0.040580420156460344,
"acc_norm": 0.7864077669902912,
"acc_norm_stderr": 0.040580420156460344
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8760683760683761,
"acc_stderr": 0.02158649400128138,
"acc_norm": 0.8760683760683761,
"acc_norm_stderr": 0.02158649400128138
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8339719029374202,
"acc_stderr": 0.013306478243066302,
"acc_norm": 0.8339719029374202,
"acc_norm_stderr": 0.013306478243066302
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7283236994219653,
"acc_stderr": 0.023948512905468358,
"acc_norm": 0.7283236994219653,
"acc_norm_stderr": 0.023948512905468358
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.31843575418994413,
"acc_stderr": 0.015581008080360276,
"acc_norm": 0.31843575418994413,
"acc_norm_stderr": 0.015581008080360276
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7418300653594772,
"acc_stderr": 0.025058503316958143,
"acc_norm": 0.7418300653594772,
"acc_norm_stderr": 0.025058503316958143
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6977491961414791,
"acc_stderr": 0.02608270069539966,
"acc_norm": 0.6977491961414791,
"acc_norm_stderr": 0.02608270069539966
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7376543209876543,
"acc_stderr": 0.02447722285613511,
"acc_norm": 0.7376543209876543,
"acc_norm_stderr": 0.02447722285613511
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.5035460992907801,
"acc_stderr": 0.02982674915328092,
"acc_norm": 0.5035460992907801,
"acc_norm_stderr": 0.02982674915328092
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4745762711864407,
"acc_stderr": 0.012753716929101004,
"acc_norm": 0.4745762711864407,
"acc_norm_stderr": 0.012753716929101004
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6764705882352942,
"acc_stderr": 0.028418208619406755,
"acc_norm": 0.6764705882352942,
"acc_norm_stderr": 0.028418208619406755
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6879084967320261,
"acc_stderr": 0.018745011201277657,
"acc_norm": 0.6879084967320261,
"acc_norm_stderr": 0.018745011201277657
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6545454545454545,
"acc_stderr": 0.04554619617541054,
"acc_norm": 0.6545454545454545,
"acc_norm_stderr": 0.04554619617541054
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.746938775510204,
"acc_stderr": 0.027833023871399666,
"acc_norm": 0.746938775510204,
"acc_norm_stderr": 0.027833023871399666
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8407960199004975,
"acc_stderr": 0.02587064676616913,
"acc_norm": 0.8407960199004975,
"acc_norm_stderr": 0.02587064676616913
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.87,
"acc_stderr": 0.033799766898963086,
"acc_norm": 0.87,
"acc_norm_stderr": 0.033799766898963086
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5602409638554217,
"acc_stderr": 0.03864139923699122,
"acc_norm": 0.5602409638554217,
"acc_norm_stderr": 0.03864139923699122
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8304093567251462,
"acc_stderr": 0.02878210810540171,
"acc_norm": 0.8304093567251462,
"acc_norm_stderr": 0.02878210810540171
},
"harness|truthfulqa:mc|0": {
"mc1": 0.42717258261933905,
"mc1_stderr": 0.01731683441096393,
"mc2": 0.60296165179436,
"mc2_stderr": 0.015612165700803435
},
"harness|winogrande|5": {
"acc": 0.813733228097869,
"acc_stderr": 0.010941877955676207
},
"harness|gsm8k|5": {
"acc": 0.11827141774071266,
"acc_stderr": 0.008895075852434953
}
}
```
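To drill into a single benchmark, the per-task configurations can be loaded the same way (a sketch; `harness_gsm8k_5` is one of the configs declared in this card's metadata, and its "latest" split mirrors the run above):
```python
from datasets import load_dataset

# Per-example details for the 5-shot GSM8K eval of this model; "latest"
# points at run 2024-02-01T17:13:18.108149.
gsm8k_details = load_dataset("open-llm-leaderboard/details_NovoCode__Metabird-7b-DPO",
	"harness_gsm8k_5",
	split="latest")
print(gsm8k_details)
```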
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_NovoCode__Metabird-7b-DPO | [
"region:us"
] | 2024-02-01T17:15:35+00:00 | {"pretty_name": "Evaluation run of NovoCode/Metabird-7b-DPO", "dataset_summary": "Dataset automatically created during the evaluation run of model [NovoCode/Metabird-7b-DPO](https://huggingface.co/NovoCode/Metabird-7b-DPO) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_NovoCode__Metabird-7b-DPO\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-02-01T17:13:18.108149](https://huggingface.co/datasets/open-llm-leaderboard/details_NovoCode__Metabird-7b-DPO/blob/main/results_2024-02-01T17-13-18.108149.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6391904204460193,\n \"acc_stderr\": 0.031966413772480606,\n \"acc_norm\": 0.6485302281179317,\n \"acc_norm_stderr\": 0.0326892053481526,\n \"mc1\": 0.42717258261933905,\n \"mc1_stderr\": 0.01731683441096393,\n \"mc2\": 0.60296165179436,\n \"mc2_stderr\": 0.015612165700803435\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6305460750853242,\n \"acc_stderr\": 0.014104578366491892,\n \"acc_norm\": 0.659556313993174,\n \"acc_norm_stderr\": 0.013847460518892976\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6872137024497113,\n \"acc_stderr\": 0.004626805906522217,\n \"acc_norm\": 0.8628759211312488,\n \"acc_norm_stderr\": 0.0034327529819187957\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6,\n \"acc_stderr\": 0.04232073695151589,\n \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.04232073695151589\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.7171052631578947,\n \"acc_stderr\": 0.03665349695640767,\n \"acc_norm\": 0.7171052631578947,\n \"acc_norm_stderr\": 0.03665349695640767\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.62,\n \"acc_stderr\": 0.048783173121456316,\n \"acc_norm\": 0.62,\n \"acc_norm_stderr\": 0.048783173121456316\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.720754716981132,\n \"acc_stderr\": 0.027611163402399715,\n \"acc_norm\": 0.720754716981132,\n \"acc_norm_stderr\": 0.027611163402399715\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7638888888888888,\n \"acc_stderr\": 0.03551446610810826,\n \"acc_norm\": 0.7638888888888888,\n \"acc_norm_stderr\": 0.03551446610810826\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.46,\n \"acc_stderr\": 
0.05009082659620332,\n \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620332\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.54,\n \"acc_stderr\": 0.05009082659620332,\n \"acc_norm\": 0.54,\n \"acc_norm_stderr\": 0.05009082659620332\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6763005780346821,\n \"acc_stderr\": 0.035676037996391706,\n \"acc_norm\": 0.6763005780346821,\n \"acc_norm_stderr\": 0.035676037996391706\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.3627450980392157,\n \"acc_stderr\": 0.04784060704105654,\n \"acc_norm\": 0.3627450980392157,\n \"acc_norm_stderr\": 0.04784060704105654\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.77,\n \"acc_stderr\": 0.04229525846816508,\n \"acc_norm\": 0.77,\n \"acc_norm_stderr\": 0.04229525846816508\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5914893617021276,\n \"acc_stderr\": 0.032134180267015755,\n \"acc_norm\": 0.5914893617021276,\n \"acc_norm_stderr\": 0.032134180267015755\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5087719298245614,\n \"acc_stderr\": 0.04702880432049615,\n \"acc_norm\": 0.5087719298245614,\n \"acc_norm_stderr\": 0.04702880432049615\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5379310344827586,\n \"acc_stderr\": 0.04154659671707548,\n \"acc_norm\": 0.5379310344827586,\n \"acc_norm_stderr\": 0.04154659671707548\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.42328042328042326,\n \"acc_stderr\": 0.025446365634406772,\n \"acc_norm\": 0.42328042328042326,\n \"acc_norm_stderr\": 0.025446365634406772\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.46825396825396826,\n \"acc_stderr\": 0.04463112720677172,\n \"acc_norm\": 0.46825396825396826,\n \"acc_norm_stderr\": 0.04463112720677172\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7741935483870968,\n \"acc_stderr\": 0.023785577884181012,\n \"acc_norm\": 0.7741935483870968,\n \"acc_norm_stderr\": 0.023785577884181012\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5073891625615764,\n \"acc_stderr\": 0.035176035403610105,\n \"acc_norm\": 0.5073891625615764,\n \"acc_norm_stderr\": 0.035176035403610105\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7818181818181819,\n \"acc_stderr\": 0.03225078108306289,\n \"acc_norm\": 0.7818181818181819,\n \"acc_norm_stderr\": 0.03225078108306289\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7929292929292929,\n \"acc_stderr\": 0.028869778460267045,\n \"acc_norm\": 0.7929292929292929,\n \"acc_norm_stderr\": 0.028869778460267045\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.9067357512953368,\n \"acc_stderr\": 0.02098685459328974,\n \"acc_norm\": 0.9067357512953368,\n \"acc_norm_stderr\": 0.02098685459328974\n },\n 
\"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6564102564102564,\n \"acc_stderr\": 0.024078696580635477,\n \"acc_norm\": 0.6564102564102564,\n \"acc_norm_stderr\": 0.024078696580635477\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.34444444444444444,\n \"acc_stderr\": 0.02897264888484427,\n \"acc_norm\": 0.34444444444444444,\n \"acc_norm_stderr\": 0.02897264888484427\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.7100840336134454,\n \"acc_stderr\": 0.029472485833136084,\n \"acc_norm\": 0.7100840336134454,\n \"acc_norm_stderr\": 0.029472485833136084\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.33774834437086093,\n \"acc_stderr\": 0.038615575462551684,\n \"acc_norm\": 0.33774834437086093,\n \"acc_norm_stderr\": 0.038615575462551684\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8458715596330275,\n \"acc_stderr\": 0.015480826865374303,\n \"acc_norm\": 0.8458715596330275,\n \"acc_norm_stderr\": 0.015480826865374303\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5416666666666666,\n \"acc_stderr\": 0.03398110890294636,\n \"acc_norm\": 0.5416666666666666,\n \"acc_norm_stderr\": 0.03398110890294636\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8137254901960784,\n \"acc_stderr\": 0.027325470966716312,\n \"acc_norm\": 0.8137254901960784,\n \"acc_norm_stderr\": 0.027325470966716312\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.8312236286919831,\n \"acc_stderr\": 0.024381406832586237,\n \"acc_norm\": 0.8312236286919831,\n \"acc_norm_stderr\": 0.024381406832586237\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6995515695067265,\n \"acc_stderr\": 0.03076935200822914,\n \"acc_norm\": 0.6995515695067265,\n \"acc_norm_stderr\": 0.03076935200822914\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7709923664122137,\n \"acc_stderr\": 0.036853466317118506,\n \"acc_norm\": 0.7709923664122137,\n \"acc_norm_stderr\": 0.036853466317118506\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.768595041322314,\n \"acc_stderr\": 0.03849856098794088,\n \"acc_norm\": 0.768595041322314,\n \"acc_norm_stderr\": 0.03849856098794088\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7777777777777778,\n \"acc_stderr\": 0.0401910747255735,\n \"acc_norm\": 0.7777777777777778,\n \"acc_norm_stderr\": 0.0401910747255735\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7668711656441718,\n \"acc_stderr\": 0.0332201579577674,\n \"acc_norm\": 0.7668711656441718,\n \"acc_norm_stderr\": 0.0332201579577674\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.44642857142857145,\n \"acc_stderr\": 0.04718471485219588,\n \"acc_norm\": 0.44642857142857145,\n \"acc_norm_stderr\": 0.04718471485219588\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7864077669902912,\n \"acc_stderr\": 0.040580420156460344,\n \"acc_norm\": 0.7864077669902912,\n \"acc_norm_stderr\": 0.040580420156460344\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8760683760683761,\n \"acc_stderr\": 0.02158649400128138,\n \"acc_norm\": 0.8760683760683761,\n \"acc_norm_stderr\": 0.02158649400128138\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 
0.8339719029374202,\n \"acc_stderr\": 0.013306478243066302,\n \"acc_norm\": 0.8339719029374202,\n \"acc_norm_stderr\": 0.013306478243066302\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7283236994219653,\n \"acc_stderr\": 0.023948512905468358,\n \"acc_norm\": 0.7283236994219653,\n \"acc_norm_stderr\": 0.023948512905468358\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.31843575418994413,\n \"acc_stderr\": 0.015581008080360276,\n \"acc_norm\": 0.31843575418994413,\n \"acc_norm_stderr\": 0.015581008080360276\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7418300653594772,\n \"acc_stderr\": 0.025058503316958143,\n \"acc_norm\": 0.7418300653594772,\n \"acc_norm_stderr\": 0.025058503316958143\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6977491961414791,\n \"acc_stderr\": 0.02608270069539966,\n \"acc_norm\": 0.6977491961414791,\n \"acc_norm_stderr\": 0.02608270069539966\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7376543209876543,\n \"acc_stderr\": 0.02447722285613511,\n \"acc_norm\": 0.7376543209876543,\n \"acc_norm_stderr\": 0.02447722285613511\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.5035460992907801,\n \"acc_stderr\": 0.02982674915328092,\n \"acc_norm\": 0.5035460992907801,\n \"acc_norm_stderr\": 0.02982674915328092\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4745762711864407,\n \"acc_stderr\": 0.012753716929101004,\n \"acc_norm\": 0.4745762711864407,\n \"acc_norm_stderr\": 0.012753716929101004\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6764705882352942,\n \"acc_stderr\": 0.028418208619406755,\n \"acc_norm\": 0.6764705882352942,\n \"acc_norm_stderr\": 0.028418208619406755\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6879084967320261,\n \"acc_stderr\": 0.018745011201277657,\n \"acc_norm\": 0.6879084967320261,\n \"acc_norm_stderr\": 0.018745011201277657\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6545454545454545,\n \"acc_stderr\": 0.04554619617541054,\n \"acc_norm\": 0.6545454545454545,\n \"acc_norm_stderr\": 0.04554619617541054\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.746938775510204,\n \"acc_stderr\": 0.027833023871399666,\n \"acc_norm\": 0.746938775510204,\n \"acc_norm_stderr\": 0.027833023871399666\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8407960199004975,\n \"acc_stderr\": 0.02587064676616913,\n \"acc_norm\": 0.8407960199004975,\n \"acc_norm_stderr\": 0.02587064676616913\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.87,\n \"acc_stderr\": 0.033799766898963086,\n \"acc_norm\": 0.87,\n \"acc_norm_stderr\": 0.033799766898963086\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5602409638554217,\n \"acc_stderr\": 0.03864139923699122,\n \"acc_norm\": 0.5602409638554217,\n \"acc_norm_stderr\": 0.03864139923699122\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8304093567251462,\n \"acc_stderr\": 0.02878210810540171,\n \"acc_norm\": 0.8304093567251462,\n \"acc_norm_stderr\": 0.02878210810540171\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.42717258261933905,\n \"mc1_stderr\": 0.01731683441096393,\n \"mc2\": 0.60296165179436,\n \"mc2_stderr\": 0.015612165700803435\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.813733228097869,\n \"acc_stderr\": 0.010941877955676207\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.11827141774071266,\n \"acc_stderr\": 0.008895075852434953\n 
}\n}\n```", "repo_url": "https://huggingface.co/NovoCode/Metabird-7b-DPO", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|arc:challenge|25_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|gsm8k|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|hellaswag|10_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T17-13-18.108149.parquet", 
"**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-management|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-virology|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T17-13-18.108149.parquet", 
"**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-management|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-02-01T17-13-18.108149.parquet", 
"**/details_harness|hendrycksTest-philosophy|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-virology|5_2024-02-01T17-13-18.108149.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": 
["**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-global_facts|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T17-13-18.108149.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|hendrycksTest-management|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|hendrycksTest-virology|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|truthfulqa:mc|0_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_02_01T17_13_18.108149", "path": ["**/details_harness|winogrande|5_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-02-01T17-13-18.108149.parquet"]}]}, {"config_name": "results", "data_files": [{"split": 
"2024_02_01T17_13_18.108149", "path": ["results_2024-02-01T17-13-18.108149.parquet"]}, {"split": "latest", "path": ["results_2024-02-01T17-13-18.108149.parquet"]}]}]} | 2024-02-01T17:16:01+00:00 | [] | [] | TAGS
# Dataset Card for Evaluation run of NovoCode/Metabird-7b-DPO
Dataset automatically created during the evaluation run of model NovoCode/Metabird-7b-DPO on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
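A minimal sketch with the `datasets` library (the repository id `open-llm-leaderboard/details_NovoCode__Metabird-7b-DPO` is inferred from the model name following the leaderboard's naming convention, so treat it as an assumption):

```python
from datasets import load_dataset

# Per-sample details for one evaluated task; config names follow the
# "harness_<task>_<n_shots>" pattern listed in this card's metadata.
data = load_dataset(
    "open-llm-leaderboard/details_NovoCode__Metabird-7b-DPO",  # inferred repo id (assumption)
    "harness_winogrande_5",
    split="train",
)
```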
## Latest results
These are the latest results from run 2024-02-01T17:13:18.108149 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
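The aggregated numbers live in the "results" configuration, whose "latest" split always points at the most recent run; a sketch for retrieving them (reusing the inferred repository id from above, still an assumption):

```python
from datasets import load_dataset

# Aggregated metrics for the run; the "latest" split tracks the newest eval.
results = load_dataset(
    "open-llm-leaderboard/details_NovoCode__Metabird-7b-DPO",  # inferred repo id (assumption)
    "results",
    split="latest",
)
print(results[0])  # one row of aggregated results
```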
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
0d5941e591e255d4025bf8ee2a62dffd9aad6852 |
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | tuantmdev/mytestdataset | [
"region:us"
] | 2024-02-01T17:31:38+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 270600, "num_examples": 1346}], "download_size": 151786, "dataset_size": 270600}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2024-02-01T17:57:32+00:00 | [] | [] | TAGS
71b0e9709f4970c51bfee69ad6ae49aa19149f37 |
# Dataset Card for Evaluation run of mlabonne/NeuralDarewin-7B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [mlabonne/NeuralDarewin-7B](https://huggingface.co/mlabonne/NeuralDarewin-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_mlabonne__NeuralDarewin-7B",
"harness_winogrande_5",
split="train")
```
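The same call pattern works for any of the 63 configurations named in this card's metadata; for instance, a sketch that pulls the aggregated metrics instead of one task's details:

```python
from datasets import load_dataset

# "results" stores the aggregated run metrics; its "latest" split always
# points at the most recent evaluation of this model.
results = load_dataset("open-llm-leaderboard/details_mlabonne__NeuralDarewin-7B",
	"results",
	split="latest")
```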
## Latest results
These are the [latest results from run 2024-02-01T17:48:56.790250](https://huggingface.co/datasets/open-llm-leaderboard/details_mlabonne__NeuralDarewin-7B/blob/main/results_2024-02-01T17-48-56.790250.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6520779999242557,
"acc_stderr": 0.03214755914196501,
"acc_norm": 0.6530571027875921,
"acc_norm_stderr": 0.03279768840920175,
"mc1": 0.4675642594859241,
"mc1_stderr": 0.017466632149577613,
"mc2": 0.6291675924515658,
"mc2_stderr": 0.015571699922487066
},
"harness|arc:challenge|25": {
"acc": 0.6629692832764505,
"acc_stderr": 0.013813476652902274,
"acc_norm": 0.7013651877133106,
"acc_norm_stderr": 0.01337407861506874
},
"harness|hellaswag|10": {
"acc": 0.68442541326429,
"acc_stderr": 0.004637944965914613,
"acc_norm": 0.8639713204540929,
"acc_norm_stderr": 0.0034211839093201612
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695236,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695236
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6074074074074074,
"acc_stderr": 0.04218506215368881,
"acc_norm": 0.6074074074074074,
"acc_norm_stderr": 0.04218506215368881
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6776315789473685,
"acc_stderr": 0.03803510248351585,
"acc_norm": 0.6776315789473685,
"acc_norm_stderr": 0.03803510248351585
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.63,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.63,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7018867924528301,
"acc_stderr": 0.02815283794249387,
"acc_norm": 0.7018867924528301,
"acc_norm_stderr": 0.02815283794249387
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7847222222222222,
"acc_stderr": 0.03437079344106135,
"acc_norm": 0.7847222222222222,
"acc_norm_stderr": 0.03437079344106135
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.45,
"acc_stderr": 0.05,
"acc_norm": 0.45,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.52,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.52,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.35,
"acc_stderr": 0.047937248544110196,
"acc_norm": 0.35,
"acc_norm_stderr": 0.047937248544110196
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6647398843930635,
"acc_stderr": 0.03599586301247077,
"acc_norm": 0.6647398843930635,
"acc_norm_stderr": 0.03599586301247077
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.43137254901960786,
"acc_stderr": 0.04928099597287534,
"acc_norm": 0.43137254901960786,
"acc_norm_stderr": 0.04928099597287534
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.74,
"acc_stderr": 0.04408440022768078,
"acc_norm": 0.74,
"acc_norm_stderr": 0.04408440022768078
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5617021276595745,
"acc_stderr": 0.03243618636108101,
"acc_norm": 0.5617021276595745,
"acc_norm_stderr": 0.03243618636108101
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5087719298245614,
"acc_stderr": 0.04702880432049615,
"acc_norm": 0.5087719298245614,
"acc_norm_stderr": 0.04702880432049615
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.593103448275862,
"acc_stderr": 0.04093793981266236,
"acc_norm": 0.593103448275862,
"acc_norm_stderr": 0.04093793981266236
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4126984126984127,
"acc_stderr": 0.025355741263055263,
"acc_norm": 0.4126984126984127,
"acc_norm_stderr": 0.025355741263055263
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4444444444444444,
"acc_stderr": 0.044444444444444495,
"acc_norm": 0.4444444444444444,
"acc_norm_stderr": 0.044444444444444495
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.37,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.37,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7838709677419354,
"acc_stderr": 0.02341529343356853,
"acc_norm": 0.7838709677419354,
"acc_norm_stderr": 0.02341529343356853
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.4975369458128079,
"acc_stderr": 0.03517945038691063,
"acc_norm": 0.4975369458128079,
"acc_norm_stderr": 0.03517945038691063
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.73,
"acc_stderr": 0.044619604333847394,
"acc_norm": 0.73,
"acc_norm_stderr": 0.044619604333847394
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7636363636363637,
"acc_stderr": 0.03317505930009182,
"acc_norm": 0.7636363636363637,
"acc_norm_stderr": 0.03317505930009182
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7828282828282829,
"acc_stderr": 0.029376616484945633,
"acc_norm": 0.7828282828282829,
"acc_norm_stderr": 0.029376616484945633
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8963730569948186,
"acc_stderr": 0.02199531196364424,
"acc_norm": 0.8963730569948186,
"acc_norm_stderr": 0.02199531196364424
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.676923076923077,
"acc_stderr": 0.02371088850197057,
"acc_norm": 0.676923076923077,
"acc_norm_stderr": 0.02371088850197057
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.362962962962963,
"acc_stderr": 0.02931820364520686,
"acc_norm": 0.362962962962963,
"acc_norm_stderr": 0.02931820364520686
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6932773109243697,
"acc_stderr": 0.02995382389188704,
"acc_norm": 0.6932773109243697,
"acc_norm_stderr": 0.02995382389188704
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.31788079470198677,
"acc_stderr": 0.038020397601079024,
"acc_norm": 0.31788079470198677,
"acc_norm_stderr": 0.038020397601079024
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8550458715596331,
"acc_stderr": 0.015094215699700476,
"acc_norm": 0.8550458715596331,
"acc_norm_stderr": 0.015094215699700476
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5462962962962963,
"acc_stderr": 0.03395322726375797,
"acc_norm": 0.5462962962962963,
"acc_norm_stderr": 0.03395322726375797
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8235294117647058,
"acc_stderr": 0.02675640153807897,
"acc_norm": 0.8235294117647058,
"acc_norm_stderr": 0.02675640153807897
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8185654008438819,
"acc_stderr": 0.025085961144579647,
"acc_norm": 0.8185654008438819,
"acc_norm_stderr": 0.025085961144579647
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6771300448430493,
"acc_stderr": 0.031381476375754995,
"acc_norm": 0.6771300448430493,
"acc_norm_stderr": 0.031381476375754995
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7786259541984732,
"acc_stderr": 0.03641297081313729,
"acc_norm": 0.7786259541984732,
"acc_norm_stderr": 0.03641297081313729
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7933884297520661,
"acc_stderr": 0.03695980128098823,
"acc_norm": 0.7933884297520661,
"acc_norm_stderr": 0.03695980128098823
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8148148148148148,
"acc_stderr": 0.03755265865037181,
"acc_norm": 0.8148148148148148,
"acc_norm_stderr": 0.03755265865037181
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7668711656441718,
"acc_stderr": 0.0332201579577674,
"acc_norm": 0.7668711656441718,
"acc_norm_stderr": 0.0332201579577674
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.45535714285714285,
"acc_stderr": 0.047268355537191,
"acc_norm": 0.45535714285714285,
"acc_norm_stderr": 0.047268355537191
},
"harness|hendrycksTest-management|5": {
"acc": 0.7864077669902912,
"acc_stderr": 0.040580420156460344,
"acc_norm": 0.7864077669902912,
"acc_norm_stderr": 0.040580420156460344
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8760683760683761,
"acc_stderr": 0.021586494001281376,
"acc_norm": 0.8760683760683761,
"acc_norm_stderr": 0.021586494001281376
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8403575989782887,
"acc_stderr": 0.013097934513263005,
"acc_norm": 0.8403575989782887,
"acc_norm_stderr": 0.013097934513263005
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7456647398843931,
"acc_stderr": 0.023445826276545543,
"acc_norm": 0.7456647398843931,
"acc_norm_stderr": 0.023445826276545543
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.39664804469273746,
"acc_stderr": 0.016361354769822468,
"acc_norm": 0.39664804469273746,
"acc_norm_stderr": 0.016361354769822468
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.738562091503268,
"acc_stderr": 0.025160998214292456,
"acc_norm": 0.738562091503268,
"acc_norm_stderr": 0.025160998214292456
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7009646302250804,
"acc_stderr": 0.02600330111788514,
"acc_norm": 0.7009646302250804,
"acc_norm_stderr": 0.02600330111788514
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7654320987654321,
"acc_stderr": 0.023576881744005723,
"acc_norm": 0.7654320987654321,
"acc_norm_stderr": 0.023576881744005723
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.48226950354609927,
"acc_stderr": 0.02980873964223777,
"acc_norm": 0.48226950354609927,
"acc_norm_stderr": 0.02980873964223777
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.46740547588005216,
"acc_stderr": 0.012743072942653347,
"acc_norm": 0.46740547588005216,
"acc_norm_stderr": 0.012743072942653347
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.7095588235294118,
"acc_stderr": 0.02757646862274054,
"acc_norm": 0.7095588235294118,
"acc_norm_stderr": 0.02757646862274054
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6781045751633987,
"acc_stderr": 0.01890101532209309,
"acc_norm": 0.6781045751633987,
"acc_norm_stderr": 0.01890101532209309
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6818181818181818,
"acc_stderr": 0.04461272175910509,
"acc_norm": 0.6818181818181818,
"acc_norm_stderr": 0.04461272175910509
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7224489795918367,
"acc_stderr": 0.028666857790274645,
"acc_norm": 0.7224489795918367,
"acc_norm_stderr": 0.028666857790274645
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8407960199004975,
"acc_stderr": 0.025870646766169136,
"acc_norm": 0.8407960199004975,
"acc_norm_stderr": 0.025870646766169136
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.82,
"acc_stderr": 0.038612291966536934,
"acc_norm": 0.82,
"acc_norm_stderr": 0.038612291966536934
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5662650602409639,
"acc_stderr": 0.03858158940685516,
"acc_norm": 0.5662650602409639,
"acc_norm_stderr": 0.03858158940685516
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8128654970760234,
"acc_stderr": 0.029913127232368036,
"acc_norm": 0.8128654970760234,
"acc_norm_stderr": 0.029913127232368036
},
"harness|truthfulqa:mc|0": {
"mc1": 0.4675642594859241,
"mc1_stderr": 0.017466632149577613,
"mc2": 0.6291675924515658,
"mc2_stderr": 0.015571699922487066
},
"harness|winogrande|5": {
"acc": 0.7971586424625099,
"acc_stderr": 0.011301439925936652
},
"harness|gsm8k|5": {
"acc": 0.6671721000758151,
"acc_stderr": 0.012979892496598287
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_mlabonne__NeuralDarewin-7B | [
"region:us"
] | 2024-02-01T17:51:17+00:00 | {"pretty_name": "Evaluation run of mlabonne/NeuralDarewin-7B", "dataset_summary": "Dataset automatically created during the evaluation run of model [mlabonne/NeuralDarewin-7B](https://huggingface.co/mlabonne/NeuralDarewin-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_mlabonne__NeuralDarewin-7B\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-02-01T17:48:56.790250](https://huggingface.co/datasets/open-llm-leaderboard/details_mlabonne__NeuralDarewin-7B/blob/main/results_2024-02-01T17-48-56.790250.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6520779999242557,\n \"acc_stderr\": 0.03214755914196501,\n \"acc_norm\": 0.6530571027875921,\n \"acc_norm_stderr\": 0.03279768840920175,\n \"mc1\": 0.4675642594859241,\n \"mc1_stderr\": 0.017466632149577613,\n \"mc2\": 0.6291675924515658,\n \"mc2_stderr\": 0.015571699922487066\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6629692832764505,\n \"acc_stderr\": 0.013813476652902274,\n \"acc_norm\": 0.7013651877133106,\n \"acc_norm_stderr\": 0.01337407861506874\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.68442541326429,\n \"acc_stderr\": 0.004637944965914613,\n \"acc_norm\": 0.8639713204540929,\n \"acc_norm_stderr\": 0.0034211839093201612\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695236,\n \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695236\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6074074074074074,\n \"acc_stderr\": 0.04218506215368881,\n \"acc_norm\": 0.6074074074074074,\n \"acc_norm_stderr\": 0.04218506215368881\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6776315789473685,\n \"acc_stderr\": 0.03803510248351585,\n \"acc_norm\": 0.6776315789473685,\n \"acc_norm_stderr\": 0.03803510248351585\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.63,\n \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\": 0.63,\n \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.7018867924528301,\n \"acc_stderr\": 0.02815283794249387,\n \"acc_norm\": 0.7018867924528301,\n \"acc_norm_stderr\": 0.02815283794249387\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7847222222222222,\n \"acc_stderr\": 0.03437079344106135,\n \"acc_norm\": 0.7847222222222222,\n \"acc_norm_stderr\": 0.03437079344106135\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.45,\n 
\"acc_stderr\": 0.05,\n \"acc_norm\": 0.45,\n \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.52,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.52,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.047937248544110196,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.047937248544110196\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6647398843930635,\n \"acc_stderr\": 0.03599586301247077,\n \"acc_norm\": 0.6647398843930635,\n \"acc_norm_stderr\": 0.03599586301247077\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.43137254901960786,\n \"acc_stderr\": 0.04928099597287534,\n \"acc_norm\": 0.43137254901960786,\n \"acc_norm_stderr\": 0.04928099597287534\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.74,\n \"acc_stderr\": 0.04408440022768078,\n \"acc_norm\": 0.74,\n \"acc_norm_stderr\": 0.04408440022768078\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5617021276595745,\n \"acc_stderr\": 0.03243618636108101,\n \"acc_norm\": 0.5617021276595745,\n \"acc_norm_stderr\": 0.03243618636108101\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5087719298245614,\n \"acc_stderr\": 0.04702880432049615,\n \"acc_norm\": 0.5087719298245614,\n \"acc_norm_stderr\": 0.04702880432049615\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.593103448275862,\n \"acc_stderr\": 0.04093793981266236,\n \"acc_norm\": 0.593103448275862,\n \"acc_norm_stderr\": 0.04093793981266236\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.4126984126984127,\n \"acc_stderr\": 0.025355741263055263,\n \"acc_norm\": 0.4126984126984127,\n \"acc_norm_stderr\": 0.025355741263055263\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4444444444444444,\n \"acc_stderr\": 0.044444444444444495,\n \"acc_norm\": 0.4444444444444444,\n \"acc_norm_stderr\": 0.044444444444444495\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.37,\n \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7838709677419354,\n \"acc_stderr\": 0.02341529343356853,\n \"acc_norm\": 0.7838709677419354,\n \"acc_norm_stderr\": 0.02341529343356853\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.4975369458128079,\n \"acc_stderr\": 0.03517945038691063,\n \"acc_norm\": 0.4975369458128079,\n \"acc_norm_stderr\": 0.03517945038691063\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.73,\n \"acc_stderr\": 0.044619604333847394,\n \"acc_norm\": 0.73,\n \"acc_norm_stderr\": 0.044619604333847394\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7636363636363637,\n \"acc_stderr\": 0.03317505930009182,\n \"acc_norm\": 0.7636363636363637,\n \"acc_norm_stderr\": 0.03317505930009182\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7828282828282829,\n \"acc_stderr\": 0.029376616484945633,\n \"acc_norm\": 0.7828282828282829,\n \"acc_norm_stderr\": 0.029376616484945633\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8963730569948186,\n \"acc_stderr\": 0.02199531196364424,\n \"acc_norm\": 0.8963730569948186,\n \"acc_norm_stderr\": 0.02199531196364424\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n 
\"acc\": 0.676923076923077,\n \"acc_stderr\": 0.02371088850197057,\n \"acc_norm\": 0.676923076923077,\n \"acc_norm_stderr\": 0.02371088850197057\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.362962962962963,\n \"acc_stderr\": 0.02931820364520686,\n \"acc_norm\": 0.362962962962963,\n \"acc_norm_stderr\": 0.02931820364520686\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6932773109243697,\n \"acc_stderr\": 0.02995382389188704,\n \"acc_norm\": 0.6932773109243697,\n \"acc_norm_stderr\": 0.02995382389188704\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.31788079470198677,\n \"acc_stderr\": 0.038020397601079024,\n \"acc_norm\": 0.31788079470198677,\n \"acc_norm_stderr\": 0.038020397601079024\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8550458715596331,\n \"acc_stderr\": 0.015094215699700476,\n \"acc_norm\": 0.8550458715596331,\n \"acc_norm_stderr\": 0.015094215699700476\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5462962962962963,\n \"acc_stderr\": 0.03395322726375797,\n \"acc_norm\": 0.5462962962962963,\n \"acc_norm_stderr\": 0.03395322726375797\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8235294117647058,\n \"acc_stderr\": 0.02675640153807897,\n \"acc_norm\": 0.8235294117647058,\n \"acc_norm_stderr\": 0.02675640153807897\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.8185654008438819,\n \"acc_stderr\": 0.025085961144579647,\n \"acc_norm\": 0.8185654008438819,\n \"acc_norm_stderr\": 0.025085961144579647\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6771300448430493,\n \"acc_stderr\": 0.031381476375754995,\n \"acc_norm\": 0.6771300448430493,\n \"acc_norm_stderr\": 0.031381476375754995\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7786259541984732,\n \"acc_stderr\": 0.03641297081313729,\n \"acc_norm\": 0.7786259541984732,\n \"acc_norm_stderr\": 0.03641297081313729\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7933884297520661,\n \"acc_stderr\": 0.03695980128098823,\n \"acc_norm\": 0.7933884297520661,\n \"acc_norm_stderr\": 0.03695980128098823\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8148148148148148,\n \"acc_stderr\": 0.03755265865037181,\n \"acc_norm\": 0.8148148148148148,\n \"acc_norm_stderr\": 0.03755265865037181\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7668711656441718,\n \"acc_stderr\": 0.0332201579577674,\n \"acc_norm\": 0.7668711656441718,\n \"acc_norm_stderr\": 0.0332201579577674\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.45535714285714285,\n \"acc_stderr\": 0.047268355537191,\n \"acc_norm\": 0.45535714285714285,\n \"acc_norm_stderr\": 0.047268355537191\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7864077669902912,\n \"acc_stderr\": 0.040580420156460344,\n \"acc_norm\": 0.7864077669902912,\n \"acc_norm_stderr\": 0.040580420156460344\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8760683760683761,\n \"acc_stderr\": 0.021586494001281376,\n \"acc_norm\": 0.8760683760683761,\n \"acc_norm_stderr\": 0.021586494001281376\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8403575989782887,\n \"acc_stderr\": 0.013097934513263005,\n 
\"acc_norm\": 0.8403575989782887,\n \"acc_norm_stderr\": 0.013097934513263005\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7456647398843931,\n \"acc_stderr\": 0.023445826276545543,\n \"acc_norm\": 0.7456647398843931,\n \"acc_norm_stderr\": 0.023445826276545543\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.39664804469273746,\n \"acc_stderr\": 0.016361354769822468,\n \"acc_norm\": 0.39664804469273746,\n \"acc_norm_stderr\": 0.016361354769822468\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.738562091503268,\n \"acc_stderr\": 0.025160998214292456,\n \"acc_norm\": 0.738562091503268,\n \"acc_norm_stderr\": 0.025160998214292456\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7009646302250804,\n \"acc_stderr\": 0.02600330111788514,\n \"acc_norm\": 0.7009646302250804,\n \"acc_norm_stderr\": 0.02600330111788514\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7654320987654321,\n \"acc_stderr\": 0.023576881744005723,\n \"acc_norm\": 0.7654320987654321,\n \"acc_norm_stderr\": 0.023576881744005723\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.48226950354609927,\n \"acc_stderr\": 0.02980873964223777,\n \"acc_norm\": 0.48226950354609927,\n \"acc_norm_stderr\": 0.02980873964223777\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.46740547588005216,\n \"acc_stderr\": 0.012743072942653347,\n \"acc_norm\": 0.46740547588005216,\n \"acc_norm_stderr\": 0.012743072942653347\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.7095588235294118,\n \"acc_stderr\": 0.02757646862274054,\n \"acc_norm\": 0.7095588235294118,\n \"acc_norm_stderr\": 0.02757646862274054\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6781045751633987,\n \"acc_stderr\": 0.01890101532209309,\n \"acc_norm\": 0.6781045751633987,\n \"acc_norm_stderr\": 0.01890101532209309\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6818181818181818,\n \"acc_stderr\": 0.04461272175910509,\n \"acc_norm\": 0.6818181818181818,\n \"acc_norm_stderr\": 0.04461272175910509\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7224489795918367,\n \"acc_stderr\": 0.028666857790274645,\n \"acc_norm\": 0.7224489795918367,\n \"acc_norm_stderr\": 0.028666857790274645\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8407960199004975,\n \"acc_stderr\": 0.025870646766169136,\n \"acc_norm\": 0.8407960199004975,\n \"acc_norm_stderr\": 0.025870646766169136\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.82,\n \"acc_stderr\": 0.038612291966536934,\n \"acc_norm\": 0.82,\n \"acc_norm_stderr\": 0.038612291966536934\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5662650602409639,\n \"acc_stderr\": 0.03858158940685516,\n \"acc_norm\": 0.5662650602409639,\n \"acc_norm_stderr\": 0.03858158940685516\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8128654970760234,\n \"acc_stderr\": 0.029913127232368036,\n \"acc_norm\": 0.8128654970760234,\n \"acc_norm_stderr\": 0.029913127232368036\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4675642594859241,\n \"mc1_stderr\": 0.017466632149577613,\n \"mc2\": 0.6291675924515658,\n \"mc2_stderr\": 0.015571699922487066\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7971586424625099,\n \"acc_stderr\": 0.011301439925936652\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6671721000758151,\n \"acc_stderr\": 0.012979892496598287\n }\n}\n```", "repo_url": 
"https://huggingface.co/mlabonne/NeuralDarewin-7B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|arc:challenge|25_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|gsm8k|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|hellaswag|10_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T17-48-56.790250.parquet", 
"**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-management|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-virology|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T17-48-56.790250.parquet", 
"**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-management|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-02-01T17-48-56.790250.parquet", 
"**/details_harness|hendrycksTest-philosophy|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-virology|5_2024-02-01T17-48-56.790250.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": 
["**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-global_facts|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T17-48-56.790250.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|hendrycksTest-management|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|hendrycksTest-virology|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|truthfulqa:mc|0_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_02_01T17_48_56.790250", "path": ["**/details_harness|winogrande|5_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-02-01T17-48-56.790250.parquet"]}]}, {"config_name": "results", "data_files": [{"split": 
"2024_02_01T17_48_56.790250", "path": ["results_2024-02-01T17-48-56.790250.parquet"]}, {"split": "latest", "path": ["results_2024-02-01T17-48-56.790250.parquet"]}]}]} | 2024-02-01T17:51:44+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of mlabonne/NeuralDarewin-7B
Dataset automatically created during the evaluation run of model mlabonne/NeuralDarewin-7B on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
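A minimal sketch, assuming this repository follows the usual `details_<org>__<model>` naming of these leaderboard detail datasets (the task config name is taken from this repo's metadata):

```python
from datasets import load_dataset

# Each evaluated task is its own config; "train" always points to the latest run.
data = load_dataset("open-llm-leaderboard/details_mlabonne__NeuralDarewin-7B",
	"harness_winogrande_5",
	split="train")
```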
## Latest results
These are the latest results from run 2024-02-01T17:48:56.790250 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results under the "latest" split for each eval):
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Evaluation run of mlabonne/NeuralDarewin-7B\n\n\n\nDataset automatically created during the evaluation run of model mlabonne/NeuralDarewin-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-02-01T17:48:56.790250(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of mlabonne/NeuralDarewin-7B\n\n\n\nDataset automatically created during the evaluation run of model mlabonne/NeuralDarewin-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-02-01T17:48:56.790250(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] |
c2300ba1210cad8b8b431bdd7cd3fd0c199ca921 |
# Dataset Card for Evaluation run of RatanRohith/NeuralPizza-7B-V0.3
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [RatanRohith/NeuralPizza-7B-V0.3](https://huggingface.co/RatanRohith/NeuralPizza-7B-V0.3) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_RatanRohith__NeuralPizza-7B-V0.3",
"harness_winogrande_5",
split="train")
```
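To pull the aggregated metrics instead of a single task's details, you can load the "results" configuration; a minimal sketch, assuming this repo declares the same "results" config and "latest" split as its sibling detail repos:

```python
from datasets import load_dataset

# Aggregated run-level metrics; "latest" resolves to the newest results file.
results = load_dataset("open-llm-leaderboard/details_RatanRohith__NeuralPizza-7B-V0.3",
	"results",
	split="latest")
```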
## Latest results
These are the [latest results from run 2024-02-01T17:49:35.277472](https://huggingface.co/datasets/open-llm-leaderboard/details_RatanRohith__NeuralPizza-7B-V0.3/blob/main/results_2024-02-01T17-49-35.277472.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results under the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6463893662332927,
"acc_stderr": 0.03224452934744884,
"acc_norm": 0.6479975016510882,
"acc_norm_stderr": 0.032891778674840784,
"mc1": 0.5140758873929009,
"mc1_stderr": 0.017496563717042776,
"mc2": 0.6793456051279607,
"mc2_stderr": 0.015369634410362739
},
"harness|arc:challenge|25": {
"acc": 0.6783276450511946,
"acc_stderr": 0.013650488084494162,
"acc_norm": 0.7107508532423208,
"acc_norm_stderr": 0.013250012579393441
},
"harness|hellaswag|10": {
"acc": 0.710017924716192,
"acc_stderr": 0.004528264116475881,
"acc_norm": 0.8738299143596893,
"acc_norm_stderr": 0.0033136235601649287
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.34,
"acc_stderr": 0.047609522856952365,
"acc_norm": 0.34,
"acc_norm_stderr": 0.047609522856952365
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6370370370370371,
"acc_stderr": 0.041539484047423976,
"acc_norm": 0.6370370370370371,
"acc_norm_stderr": 0.041539484047423976
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6842105263157895,
"acc_stderr": 0.0378272898086547,
"acc_norm": 0.6842105263157895,
"acc_norm_stderr": 0.0378272898086547
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.61,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.61,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7132075471698113,
"acc_stderr": 0.027834912527544067,
"acc_norm": 0.7132075471698113,
"acc_norm_stderr": 0.027834912527544067
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7569444444444444,
"acc_stderr": 0.0358687928008034,
"acc_norm": 0.7569444444444444,
"acc_norm_stderr": 0.0358687928008034
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.47,
"acc_stderr": 0.050161355804659205,
"acc_norm": 0.47,
"acc_norm_stderr": 0.050161355804659205
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.54,
"acc_stderr": 0.05009082659620333,
"acc_norm": 0.54,
"acc_norm_stderr": 0.05009082659620333
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.29,
"acc_stderr": 0.04560480215720683,
"acc_norm": 0.29,
"acc_norm_stderr": 0.04560480215720683
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6763005780346821,
"acc_stderr": 0.0356760379963917,
"acc_norm": 0.6763005780346821,
"acc_norm_stderr": 0.0356760379963917
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.45098039215686275,
"acc_stderr": 0.049512182523962625,
"acc_norm": 0.45098039215686275,
"acc_norm_stderr": 0.049512182523962625
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.75,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5872340425531914,
"acc_stderr": 0.03218471141400351,
"acc_norm": 0.5872340425531914,
"acc_norm_stderr": 0.03218471141400351
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4824561403508772,
"acc_stderr": 0.04700708033551038,
"acc_norm": 0.4824561403508772,
"acc_norm_stderr": 0.04700708033551038
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5241379310344828,
"acc_stderr": 0.04161808503501531,
"acc_norm": 0.5241379310344828,
"acc_norm_stderr": 0.04161808503501531
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.38095238095238093,
"acc_stderr": 0.025010749116137595,
"acc_norm": 0.38095238095238093,
"acc_norm_stderr": 0.025010749116137595
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4444444444444444,
"acc_stderr": 0.04444444444444449,
"acc_norm": 0.4444444444444444,
"acc_norm_stderr": 0.04444444444444449
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.37,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.37,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7806451612903226,
"acc_stderr": 0.023540799358723295,
"acc_norm": 0.7806451612903226,
"acc_norm_stderr": 0.023540799358723295
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5073891625615764,
"acc_stderr": 0.0351760354036101,
"acc_norm": 0.5073891625615764,
"acc_norm_stderr": 0.0351760354036101
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7696969696969697,
"acc_stderr": 0.0328766675860349,
"acc_norm": 0.7696969696969697,
"acc_norm_stderr": 0.0328766675860349
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7828282828282829,
"acc_stderr": 0.02937661648494563,
"acc_norm": 0.7828282828282829,
"acc_norm_stderr": 0.02937661648494563
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8963730569948186,
"acc_stderr": 0.021995311963644237,
"acc_norm": 0.8963730569948186,
"acc_norm_stderr": 0.021995311963644237
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6666666666666666,
"acc_stderr": 0.023901157979402534,
"acc_norm": 0.6666666666666666,
"acc_norm_stderr": 0.023901157979402534
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3333333333333333,
"acc_stderr": 0.028742040903948485,
"acc_norm": 0.3333333333333333,
"acc_norm_stderr": 0.028742040903948485
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.7310924369747899,
"acc_stderr": 0.028801392193631273,
"acc_norm": 0.7310924369747899,
"acc_norm_stderr": 0.028801392193631273
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3576158940397351,
"acc_stderr": 0.03913453431177258,
"acc_norm": 0.3576158940397351,
"acc_norm_stderr": 0.03913453431177258
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8422018348623853,
"acc_stderr": 0.01563002297009244,
"acc_norm": 0.8422018348623853,
"acc_norm_stderr": 0.01563002297009244
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5416666666666666,
"acc_stderr": 0.03398110890294636,
"acc_norm": 0.5416666666666666,
"acc_norm_stderr": 0.03398110890294636
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8235294117647058,
"acc_stderr": 0.026756401538078966,
"acc_norm": 0.8235294117647058,
"acc_norm_stderr": 0.026756401538078966
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.810126582278481,
"acc_stderr": 0.02553010046023349,
"acc_norm": 0.810126582278481,
"acc_norm_stderr": 0.02553010046023349
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6905829596412556,
"acc_stderr": 0.03102441174057221,
"acc_norm": 0.6905829596412556,
"acc_norm_stderr": 0.03102441174057221
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7633587786259542,
"acc_stderr": 0.03727673575596913,
"acc_norm": 0.7633587786259542,
"acc_norm_stderr": 0.03727673575596913
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.743801652892562,
"acc_stderr": 0.03984979653302872,
"acc_norm": 0.743801652892562,
"acc_norm_stderr": 0.03984979653302872
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7870370370370371,
"acc_stderr": 0.0395783547198098,
"acc_norm": 0.7870370370370371,
"acc_norm_stderr": 0.0395783547198098
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7668711656441718,
"acc_stderr": 0.0332201579577674,
"acc_norm": 0.7668711656441718,
"acc_norm_stderr": 0.0332201579577674
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4375,
"acc_stderr": 0.04708567521880525,
"acc_norm": 0.4375,
"acc_norm_stderr": 0.04708567521880525
},
"harness|hendrycksTest-management|5": {
"acc": 0.7961165048543689,
"acc_stderr": 0.03989139859531771,
"acc_norm": 0.7961165048543689,
"acc_norm_stderr": 0.03989139859531771
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8589743589743589,
"acc_stderr": 0.02280138253459753,
"acc_norm": 0.8589743589743589,
"acc_norm_stderr": 0.02280138253459753
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8237547892720306,
"acc_stderr": 0.013625556907993457,
"acc_norm": 0.8237547892720306,
"acc_norm_stderr": 0.013625556907993457
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7283236994219653,
"acc_stderr": 0.02394851290546837,
"acc_norm": 0.7283236994219653,
"acc_norm_stderr": 0.02394851290546837
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.39553072625698327,
"acc_stderr": 0.016353415410075775,
"acc_norm": 0.39553072625698327,
"acc_norm_stderr": 0.016353415410075775
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7189542483660131,
"acc_stderr": 0.02573885479781873,
"acc_norm": 0.7189542483660131,
"acc_norm_stderr": 0.02573885479781873
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7170418006430869,
"acc_stderr": 0.02558306248998481,
"acc_norm": 0.7170418006430869,
"acc_norm_stderr": 0.02558306248998481
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7345679012345679,
"acc_stderr": 0.024569223600460842,
"acc_norm": 0.7345679012345679,
"acc_norm_stderr": 0.024569223600460842
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.48936170212765956,
"acc_stderr": 0.02982074719142248,
"acc_norm": 0.48936170212765956,
"acc_norm_stderr": 0.02982074719142248
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.46479791395045633,
"acc_stderr": 0.012738547371303957,
"acc_norm": 0.46479791395045633,
"acc_norm_stderr": 0.012738547371303957
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6801470588235294,
"acc_stderr": 0.028332959514031204,
"acc_norm": 0.6801470588235294,
"acc_norm_stderr": 0.028332959514031204
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6552287581699346,
"acc_stderr": 0.01922832201869664,
"acc_norm": 0.6552287581699346,
"acc_norm_stderr": 0.01922832201869664
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6636363636363637,
"acc_stderr": 0.04525393596302506,
"acc_norm": 0.6636363636363637,
"acc_norm_stderr": 0.04525393596302506
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7306122448979592,
"acc_stderr": 0.02840125202902294,
"acc_norm": 0.7306122448979592,
"acc_norm_stderr": 0.02840125202902294
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8656716417910447,
"acc_stderr": 0.02411267824090083,
"acc_norm": 0.8656716417910447,
"acc_norm_stderr": 0.02411267824090083
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.83,
"acc_stderr": 0.0377525168068637,
"acc_norm": 0.83,
"acc_norm_stderr": 0.0377525168068637
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5180722891566265,
"acc_stderr": 0.03889951252827216,
"acc_norm": 0.5180722891566265,
"acc_norm_stderr": 0.03889951252827216
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8362573099415205,
"acc_stderr": 0.028380919596145866,
"acc_norm": 0.8362573099415205,
"acc_norm_stderr": 0.028380919596145866
},
"harness|truthfulqa:mc|0": {
"mc1": 0.5140758873929009,
"mc1_stderr": 0.017496563717042776,
"mc2": 0.6793456051279607,
"mc2_stderr": 0.015369634410362739
},
"harness|winogrande|5": {
"acc": 0.8050513022888713,
"acc_stderr": 0.011134099415938273
},
"harness|gsm8k|5": {
"acc": 0.5890826383623957,
"acc_stderr": 0.01355213290142322
}
}
```
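The same figures can also be fetched straight from the linked JSON file; a minimal sketch using `huggingface_hub` (the file's exact top-level layout is an assumption, so the snippet guards for a nested "results" key):

```python
import json
from huggingface_hub import hf_hub_download

# Download the raw results file linked above from the dataset repo.
path = hf_hub_download(
    repo_id="open-llm-leaderboard/details_RatanRohith__NeuralPizza-7B-V0.3",
    filename="results_2024-02-01T17-49-35.277472.json",
    repo_type="dataset",
)
with open(path) as f:
    payload = json.load(f)

# The dict shown above may sit at the top level or under a "results" key.
metrics = payload.get("results", payload)
print(metrics["all"]["acc"])              # 0.6463...
print(metrics["harness|gsm8k|5"]["acc"])  # 0.5890...
```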
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_RatanRohith__NeuralPizza-7B-V0.3 | [
"region:us"
] | 2024-02-01T17:51:54+00:00 | {"pretty_name": "Evaluation run of RatanRohith/NeuralPizza-7B-V0.3", "dataset_summary": "Dataset automatically created during the evaluation run of model [RatanRohith/NeuralPizza-7B-V0.3](https://huggingface.co/RatanRohith/NeuralPizza-7B-V0.3) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_RatanRohith__NeuralPizza-7B-V0.3\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-02-01T17:49:35.277472](https://huggingface.co/datasets/open-llm-leaderboard/details_RatanRohith__NeuralPizza-7B-V0.3/blob/main/results_2024-02-01T17-49-35.277472.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6463893662332927,\n \"acc_stderr\": 0.03224452934744884,\n \"acc_norm\": 0.6479975016510882,\n \"acc_norm_stderr\": 0.032891778674840784,\n \"mc1\": 0.5140758873929009,\n \"mc1_stderr\": 0.017496563717042776,\n \"mc2\": 0.6793456051279607,\n \"mc2_stderr\": 0.015369634410362739\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6783276450511946,\n \"acc_stderr\": 0.013650488084494162,\n \"acc_norm\": 0.7107508532423208,\n \"acc_norm_stderr\": 0.013250012579393441\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.710017924716192,\n \"acc_stderr\": 0.004528264116475881,\n \"acc_norm\": 0.8738299143596893,\n \"acc_norm_stderr\": 0.0033136235601649287\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.34,\n \"acc_stderr\": 0.047609522856952365,\n \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.047609522856952365\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6370370370370371,\n \"acc_stderr\": 0.041539484047423976,\n \"acc_norm\": 0.6370370370370371,\n \"acc_norm_stderr\": 0.041539484047423976\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6842105263157895,\n \"acc_stderr\": 0.0378272898086547,\n \"acc_norm\": 0.6842105263157895,\n \"acc_norm_stderr\": 0.0378272898086547\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.61,\n \"acc_stderr\": 0.04902071300001975,\n \"acc_norm\": 0.61,\n \"acc_norm_stderr\": 0.04902071300001975\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.7132075471698113,\n \"acc_stderr\": 0.027834912527544067,\n \"acc_norm\": 0.7132075471698113,\n \"acc_norm_stderr\": 0.027834912527544067\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7569444444444444,\n \"acc_stderr\": 0.0358687928008034,\n \"acc_norm\": 0.7569444444444444,\n \"acc_norm_stderr\": 0.0358687928008034\n },\n 
\"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.47,\n \"acc_stderr\": 0.050161355804659205,\n \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.050161355804659205\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.54,\n \"acc_stderr\": 0.05009082659620333,\n \"acc_norm\": 0.54,\n \"acc_norm_stderr\": 0.05009082659620333\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.29,\n \"acc_stderr\": 0.04560480215720683,\n \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.04560480215720683\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6763005780346821,\n \"acc_stderr\": 0.0356760379963917,\n \"acc_norm\": 0.6763005780346821,\n \"acc_norm_stderr\": 0.0356760379963917\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.45098039215686275,\n \"acc_stderr\": 0.049512182523962625,\n \"acc_norm\": 0.45098039215686275,\n \"acc_norm_stderr\": 0.049512182523962625\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.75,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5872340425531914,\n \"acc_stderr\": 0.03218471141400351,\n \"acc_norm\": 0.5872340425531914,\n \"acc_norm_stderr\": 0.03218471141400351\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4824561403508772,\n \"acc_stderr\": 0.04700708033551038,\n \"acc_norm\": 0.4824561403508772,\n \"acc_norm_stderr\": 0.04700708033551038\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5241379310344828,\n \"acc_stderr\": 0.04161808503501531,\n \"acc_norm\": 0.5241379310344828,\n \"acc_norm_stderr\": 0.04161808503501531\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.38095238095238093,\n \"acc_stderr\": 0.025010749116137595,\n \"acc_norm\": 0.38095238095238093,\n \"acc_norm_stderr\": 0.025010749116137595\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4444444444444444,\n \"acc_stderr\": 0.04444444444444449,\n \"acc_norm\": 0.4444444444444444,\n \"acc_norm_stderr\": 0.04444444444444449\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.37,\n \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7806451612903226,\n \"acc_stderr\": 0.023540799358723295,\n \"acc_norm\": 0.7806451612903226,\n \"acc_norm_stderr\": 0.023540799358723295\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5073891625615764,\n \"acc_stderr\": 0.0351760354036101,\n \"acc_norm\": 0.5073891625615764,\n \"acc_norm_stderr\": 0.0351760354036101\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7696969696969697,\n \"acc_stderr\": 0.0328766675860349,\n \"acc_norm\": 0.7696969696969697,\n \"acc_norm_stderr\": 0.0328766675860349\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7828282828282829,\n \"acc_stderr\": 0.02937661648494563,\n \"acc_norm\": 0.7828282828282829,\n \"acc_norm_stderr\": 0.02937661648494563\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8963730569948186,\n \"acc_stderr\": 0.021995311963644237,\n \"acc_norm\": 0.8963730569948186,\n \"acc_norm_stderr\": 
0.021995311963644237\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6666666666666666,\n \"acc_stderr\": 0.023901157979402534,\n \"acc_norm\": 0.6666666666666666,\n \"acc_norm_stderr\": 0.023901157979402534\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3333333333333333,\n \"acc_stderr\": 0.028742040903948485,\n \"acc_norm\": 0.3333333333333333,\n \"acc_norm_stderr\": 0.028742040903948485\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.7310924369747899,\n \"acc_stderr\": 0.028801392193631273,\n \"acc_norm\": 0.7310924369747899,\n \"acc_norm_stderr\": 0.028801392193631273\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3576158940397351,\n \"acc_stderr\": 0.03913453431177258,\n \"acc_norm\": 0.3576158940397351,\n \"acc_norm_stderr\": 0.03913453431177258\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8422018348623853,\n \"acc_stderr\": 0.01563002297009244,\n \"acc_norm\": 0.8422018348623853,\n \"acc_norm_stderr\": 0.01563002297009244\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5416666666666666,\n \"acc_stderr\": 0.03398110890294636,\n \"acc_norm\": 0.5416666666666666,\n \"acc_norm_stderr\": 0.03398110890294636\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8235294117647058,\n \"acc_stderr\": 0.026756401538078966,\n \"acc_norm\": 0.8235294117647058,\n \"acc_norm_stderr\": 0.026756401538078966\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.810126582278481,\n \"acc_stderr\": 0.02553010046023349,\n \"acc_norm\": 0.810126582278481,\n \"acc_norm_stderr\": 0.02553010046023349\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6905829596412556,\n \"acc_stderr\": 0.03102441174057221,\n \"acc_norm\": 0.6905829596412556,\n \"acc_norm_stderr\": 0.03102441174057221\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7633587786259542,\n \"acc_stderr\": 0.03727673575596913,\n \"acc_norm\": 0.7633587786259542,\n \"acc_norm_stderr\": 0.03727673575596913\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.743801652892562,\n \"acc_stderr\": 0.03984979653302872,\n \"acc_norm\": 0.743801652892562,\n \"acc_norm_stderr\": 0.03984979653302872\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7870370370370371,\n \"acc_stderr\": 0.0395783547198098,\n \"acc_norm\": 0.7870370370370371,\n \"acc_norm_stderr\": 0.0395783547198098\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7668711656441718,\n \"acc_stderr\": 0.0332201579577674,\n \"acc_norm\": 0.7668711656441718,\n \"acc_norm_stderr\": 0.0332201579577674\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4375,\n \"acc_stderr\": 0.04708567521880525,\n \"acc_norm\": 0.4375,\n \"acc_norm_stderr\": 0.04708567521880525\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7961165048543689,\n \"acc_stderr\": 0.03989139859531771,\n \"acc_norm\": 0.7961165048543689,\n \"acc_norm_stderr\": 0.03989139859531771\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8589743589743589,\n \"acc_stderr\": 0.02280138253459753,\n \"acc_norm\": 0.8589743589743589,\n \"acc_norm_stderr\": 0.02280138253459753\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 
0.8237547892720306,\n \"acc_stderr\": 0.013625556907993457,\n \"acc_norm\": 0.8237547892720306,\n \"acc_norm_stderr\": 0.013625556907993457\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7283236994219653,\n \"acc_stderr\": 0.02394851290546837,\n \"acc_norm\": 0.7283236994219653,\n \"acc_norm_stderr\": 0.02394851290546837\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.39553072625698327,\n \"acc_stderr\": 0.016353415410075775,\n \"acc_norm\": 0.39553072625698327,\n \"acc_norm_stderr\": 0.016353415410075775\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7189542483660131,\n \"acc_stderr\": 0.02573885479781873,\n \"acc_norm\": 0.7189542483660131,\n \"acc_norm_stderr\": 0.02573885479781873\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7170418006430869,\n \"acc_stderr\": 0.02558306248998481,\n \"acc_norm\": 0.7170418006430869,\n \"acc_norm_stderr\": 0.02558306248998481\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7345679012345679,\n \"acc_stderr\": 0.024569223600460842,\n \"acc_norm\": 0.7345679012345679,\n \"acc_norm_stderr\": 0.024569223600460842\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.48936170212765956,\n \"acc_stderr\": 0.02982074719142248,\n \"acc_norm\": 0.48936170212765956,\n \"acc_norm_stderr\": 0.02982074719142248\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.46479791395045633,\n \"acc_stderr\": 0.012738547371303957,\n \"acc_norm\": 0.46479791395045633,\n \"acc_norm_stderr\": 0.012738547371303957\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6801470588235294,\n \"acc_stderr\": 0.028332959514031204,\n \"acc_norm\": 0.6801470588235294,\n \"acc_norm_stderr\": 0.028332959514031204\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6552287581699346,\n \"acc_stderr\": 0.01922832201869664,\n \"acc_norm\": 0.6552287581699346,\n \"acc_norm_stderr\": 0.01922832201869664\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6636363636363637,\n \"acc_stderr\": 0.04525393596302506,\n \"acc_norm\": 0.6636363636363637,\n \"acc_norm_stderr\": 0.04525393596302506\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7306122448979592,\n \"acc_stderr\": 0.02840125202902294,\n \"acc_norm\": 0.7306122448979592,\n \"acc_norm_stderr\": 0.02840125202902294\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8656716417910447,\n \"acc_stderr\": 0.02411267824090083,\n \"acc_norm\": 0.8656716417910447,\n \"acc_norm_stderr\": 0.02411267824090083\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.83,\n \"acc_stderr\": 0.0377525168068637,\n \"acc_norm\": 0.83,\n \"acc_norm_stderr\": 0.0377525168068637\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5180722891566265,\n \"acc_stderr\": 0.03889951252827216,\n \"acc_norm\": 0.5180722891566265,\n \"acc_norm_stderr\": 0.03889951252827216\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8362573099415205,\n \"acc_stderr\": 0.028380919596145866,\n \"acc_norm\": 0.8362573099415205,\n \"acc_norm_stderr\": 0.028380919596145866\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5140758873929009,\n \"mc1_stderr\": 0.017496563717042776,\n \"mc2\": 0.6793456051279607,\n \"mc2_stderr\": 0.015369634410362739\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8050513022888713,\n \"acc_stderr\": 0.011134099415938273\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.5890826383623957,\n \"acc_stderr\": 0.01355213290142322\n 
}\n}\n```", "repo_url": "https://huggingface.co/RatanRohith/NeuralPizza-7B-V0.3", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|arc:challenge|25_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|gsm8k|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|hellaswag|10_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T17-49-35.277472.parquet", 
"**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-management|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-virology|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T17-49-35.277472.parquet", 
"**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-management|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-02-01T17-49-35.277472.parquet", 
"**/details_harness|hendrycksTest-philosophy|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-virology|5_2024-02-01T17-49-35.277472.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": 
["**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-global_facts|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T17-49-35.277472.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|hendrycksTest-management|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|hendrycksTest-virology|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|truthfulqa:mc|0_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_02_01T17_49_35.277472", "path": ["**/details_harness|winogrande|5_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-02-01T17-49-35.277472.parquet"]}]}, {"config_name": "results", "data_files": [{"split": 
"2024_02_01T17_49_35.277472", "path": ["results_2024-02-01T17-49-35.277472.parquet"]}, {"split": "latest", "path": ["results_2024-02-01T17-49-35.277472.parquet"]}]}]} | 2024-02-01T17:52:22+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of RatanRohith/NeuralPizza-7B-V0.3
Dataset automatically created during the evaluation run of model RatanRohith/NeuralPizza-7B-V0.3 on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
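```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_RatanRohith__NeuralPizza-7B-V0.3",
	"harness_winogrande_5",
	split="train")
```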
## Latest results
These are the latest results from run 2024-02-01T17:49:35.277472 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval).
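The aggregated numbers for this run can also be loaded programmatically; a minimal sketch, assuming the "results" configuration and its "latest" split declared in this dataset's metadata:

```python
from datasets import load_dataset

# Aggregated metrics of the latest evaluation run
results = load_dataset("open-llm-leaderboard/details_RatanRohith__NeuralPizza-7B-V0.3",
	"results",
	split="latest")
```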
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
8ab8acb86c9894444f3e8345ecb38547348d3603 |
# Dataset Card for Evaluation run of lamhieu/ghost-7b-v0.9.0
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [lamhieu/ghost-7b-v0.9.0](https://huggingface.co/lamhieu/ghost-7b-v0.9.0) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_lamhieu__ghost-7b-v0.9.0",
"harness_winogrande_5",
split="train")
```
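Each of the 63 evaluated tasks is exposed as its own configuration; to discover the available names before loading one, you can list them first (a minimal sketch using the `datasets` utility `get_dataset_config_names`):

```python
from datasets import get_dataset_config_names

# Every per-task configuration of this details dataset, e.g.
# 'harness_arc_challenge_25', 'harness_gsm8k_5', 'harness_winogrande_5', ...
configs = get_dataset_config_names("open-llm-leaderboard/details_lamhieu__ghost-7b-v0.9.0")
print(configs)
```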
## Latest results
These are the [latest results from run 2024-02-01T17:50:44.669359](https://huggingface.co/datasets/open-llm-leaderboard/details_lamhieu__ghost-7b-v0.9.0/blob/main/results_2024-02-01T17-50-44.669359.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.5499871299235607,
"acc_stderr": 0.03407586587227753,
"acc_norm": 0.5544447274332273,
"acc_norm_stderr": 0.03478665284686247,
"mc1": 0.3292533659730722,
"mc1_stderr": 0.016451264440068232,
"mc2": 0.4779306640850261,
"mc2_stderr": 0.015098925727831657
},
"harness|arc:challenge|25": {
"acc": 0.49658703071672355,
"acc_stderr": 0.014611050403244077,
"acc_norm": 0.5307167235494881,
"acc_norm_stderr": 0.014583792546304037
},
"harness|hellaswag|10": {
"acc": 0.5758812985461064,
"acc_stderr": 0.004931984642695341,
"acc_norm": 0.7793268273252341,
"acc_norm_stderr": 0.004138529919075824
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.26,
"acc_stderr": 0.04408440022768081,
"acc_norm": 0.26,
"acc_norm_stderr": 0.04408440022768081
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.4962962962962963,
"acc_stderr": 0.04319223625811331,
"acc_norm": 0.4962962962962963,
"acc_norm_stderr": 0.04319223625811331
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.5263157894736842,
"acc_stderr": 0.04063302731486671,
"acc_norm": 0.5263157894736842,
"acc_norm_stderr": 0.04063302731486671
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.55,
"acc_stderr": 0.05,
"acc_norm": 0.55,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.630188679245283,
"acc_stderr": 0.029711421880107933,
"acc_norm": 0.630188679245283,
"acc_norm_stderr": 0.029711421880107933
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.5625,
"acc_stderr": 0.04148415739394154,
"acc_norm": 0.5625,
"acc_norm_stderr": 0.04148415739394154
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.44,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.44,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.47,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.47,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695235,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695235
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.5722543352601156,
"acc_stderr": 0.037724468575180276,
"acc_norm": 0.5722543352601156,
"acc_norm_stderr": 0.037724468575180276
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.29411764705882354,
"acc_stderr": 0.04533838195929776,
"acc_norm": 0.29411764705882354,
"acc_norm_stderr": 0.04533838195929776
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.67,
"acc_stderr": 0.04725815626252607,
"acc_norm": 0.67,
"acc_norm_stderr": 0.04725815626252607
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.4425531914893617,
"acc_stderr": 0.03246956919789958,
"acc_norm": 0.4425531914893617,
"acc_norm_stderr": 0.03246956919789958
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.40350877192982454,
"acc_stderr": 0.046151869625837026,
"acc_norm": 0.40350877192982454,
"acc_norm_stderr": 0.046151869625837026
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5172413793103449,
"acc_stderr": 0.04164188720169375,
"acc_norm": 0.5172413793103449,
"acc_norm_stderr": 0.04164188720169375
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.3386243386243386,
"acc_stderr": 0.024373197867983046,
"acc_norm": 0.3386243386243386,
"acc_norm_stderr": 0.024373197867983046
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.36507936507936506,
"acc_stderr": 0.04306241259127153,
"acc_norm": 0.36507936507936506,
"acc_norm_stderr": 0.04306241259127153
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252605,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252605
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.6612903225806451,
"acc_stderr": 0.02692344605930284,
"acc_norm": 0.6612903225806451,
"acc_norm_stderr": 0.02692344605930284
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.3842364532019704,
"acc_stderr": 0.0342239856565755,
"acc_norm": 0.3842364532019704,
"acc_norm_stderr": 0.0342239856565755
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.61,
"acc_stderr": 0.04902071300001974,
"acc_norm": 0.61,
"acc_norm_stderr": 0.04902071300001974
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.6848484848484848,
"acc_stderr": 0.0362773057502241,
"acc_norm": 0.6848484848484848,
"acc_norm_stderr": 0.0362773057502241
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7373737373737373,
"acc_stderr": 0.03135305009533086,
"acc_norm": 0.7373737373737373,
"acc_norm_stderr": 0.03135305009533086
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.7772020725388601,
"acc_stderr": 0.03003114797764154,
"acc_norm": 0.7772020725388601,
"acc_norm_stderr": 0.03003114797764154
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.5051282051282051,
"acc_stderr": 0.02534967290683865,
"acc_norm": 0.5051282051282051,
"acc_norm_stderr": 0.02534967290683865
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3037037037037037,
"acc_stderr": 0.02803792996911499,
"acc_norm": 0.3037037037037037,
"acc_norm_stderr": 0.02803792996911499
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.5210084033613446,
"acc_stderr": 0.03244980849990029,
"acc_norm": 0.5210084033613446,
"acc_norm_stderr": 0.03244980849990029
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.33112582781456956,
"acc_stderr": 0.038425817186598696,
"acc_norm": 0.33112582781456956,
"acc_norm_stderr": 0.038425817186598696
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.7339449541284404,
"acc_stderr": 0.0189460223222256,
"acc_norm": 0.7339449541284404,
"acc_norm_stderr": 0.0189460223222256
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5370370370370371,
"acc_stderr": 0.03400603625538272,
"acc_norm": 0.5370370370370371,
"acc_norm_stderr": 0.03400603625538272
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.6911764705882353,
"acc_stderr": 0.03242661719827218,
"acc_norm": 0.6911764705882353,
"acc_norm_stderr": 0.03242661719827218
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.729957805907173,
"acc_stderr": 0.028900721906293426,
"acc_norm": 0.729957805907173,
"acc_norm_stderr": 0.028900721906293426
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6322869955156951,
"acc_stderr": 0.03236198350928276,
"acc_norm": 0.6322869955156951,
"acc_norm_stderr": 0.03236198350928276
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.6335877862595419,
"acc_stderr": 0.04225875451969638,
"acc_norm": 0.6335877862595419,
"acc_norm_stderr": 0.04225875451969638
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.6776859504132231,
"acc_stderr": 0.04266416363352168,
"acc_norm": 0.6776859504132231,
"acc_norm_stderr": 0.04266416363352168
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.6203703703703703,
"acc_stderr": 0.04691521224077742,
"acc_norm": 0.6203703703703703,
"acc_norm_stderr": 0.04691521224077742
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.6319018404907976,
"acc_stderr": 0.03789213935838396,
"acc_norm": 0.6319018404907976,
"acc_norm_stderr": 0.03789213935838396
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4732142857142857,
"acc_stderr": 0.047389751192741546,
"acc_norm": 0.4732142857142857,
"acc_norm_stderr": 0.047389751192741546
},
"harness|hendrycksTest-management|5": {
"acc": 0.7087378640776699,
"acc_stderr": 0.044986763205729245,
"acc_norm": 0.7087378640776699,
"acc_norm_stderr": 0.044986763205729245
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8333333333333334,
"acc_stderr": 0.024414947304543674,
"acc_norm": 0.8333333333333334,
"acc_norm_stderr": 0.024414947304543674
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.59,
"acc_stderr": 0.049431107042371025,
"acc_norm": 0.59,
"acc_norm_stderr": 0.049431107042371025
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7266922094508301,
"acc_stderr": 0.015936681062628556,
"acc_norm": 0.7266922094508301,
"acc_norm_stderr": 0.015936681062628556
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6069364161849711,
"acc_stderr": 0.026296227915613663,
"acc_norm": 0.6069364161849711,
"acc_norm_stderr": 0.026296227915613663
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.2860335195530726,
"acc_stderr": 0.015113972129062129,
"acc_norm": 0.2860335195530726,
"acc_norm_stderr": 0.015113972129062129
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.5849673202614379,
"acc_stderr": 0.028213504177824093,
"acc_norm": 0.5849673202614379,
"acc_norm_stderr": 0.028213504177824093
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6559485530546624,
"acc_stderr": 0.026981478043648033,
"acc_norm": 0.6559485530546624,
"acc_norm_stderr": 0.026981478043648033
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6111111111111112,
"acc_stderr": 0.027125115513166844,
"acc_norm": 0.6111111111111112,
"acc_norm_stderr": 0.027125115513166844
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4326241134751773,
"acc_stderr": 0.02955545423677885,
"acc_norm": 0.4326241134751773,
"acc_norm_stderr": 0.02955545423677885
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.378748370273794,
"acc_stderr": 0.012389052105003732,
"acc_norm": 0.378748370273794,
"acc_norm_stderr": 0.012389052105003732
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.5367647058823529,
"acc_stderr": 0.030290619180485687,
"acc_norm": 0.5367647058823529,
"acc_norm_stderr": 0.030290619180485687
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.5571895424836601,
"acc_stderr": 0.02009508315457735,
"acc_norm": 0.5571895424836601,
"acc_norm_stderr": 0.02009508315457735
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6181818181818182,
"acc_stderr": 0.04653429807913508,
"acc_norm": 0.6181818181818182,
"acc_norm_stderr": 0.04653429807913508
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.5346938775510204,
"acc_stderr": 0.03193207024425314,
"acc_norm": 0.5346938775510204,
"acc_norm_stderr": 0.03193207024425314
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.7761194029850746,
"acc_stderr": 0.0294752502360172,
"acc_norm": 0.7761194029850746,
"acc_norm_stderr": 0.0294752502360172
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.72,
"acc_stderr": 0.04512608598542128,
"acc_norm": 0.72,
"acc_norm_stderr": 0.04512608598542128
},
"harness|hendrycksTest-virology|5": {
"acc": 0.41566265060240964,
"acc_stderr": 0.03836722176598052,
"acc_norm": 0.41566265060240964,
"acc_norm_stderr": 0.03836722176598052
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.7426900584795322,
"acc_stderr": 0.03352799844161865,
"acc_norm": 0.7426900584795322,
"acc_norm_stderr": 0.03352799844161865
},
"harness|truthfulqa:mc|0": {
"mc1": 0.3292533659730722,
"mc1_stderr": 0.016451264440068232,
"mc2": 0.4779306640850261,
"mc2_stderr": 0.015098925727831657
},
"harness|winogrande|5": {
"acc": 0.7371744277821626,
"acc_stderr": 0.01237092252726201
},
"harness|gsm8k|5": {
"acc": 0.3373768006065201,
"acc_stderr": 0.013023665136222093
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_lamhieu__ghost-7b-v0.9.0 | [
"region:us"
] | 2024-02-01T17:53:05+00:00 | {"pretty_name": "Evaluation run of lamhieu/ghost-7b-v0.9.0", "dataset_summary": "Dataset automatically created during the evaluation run of model [lamhieu/ghost-7b-v0.9.0](https://huggingface.co/lamhieu/ghost-7b-v0.9.0) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_lamhieu__ghost-7b-v0.9.0\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-02-01T17:50:44.669359](https://huggingface.co/datasets/open-llm-leaderboard/details_lamhieu__ghost-7b-v0.9.0/blob/main/results_2024-02-01T17-50-44.669359.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5499871299235607,\n \"acc_stderr\": 0.03407586587227753,\n \"acc_norm\": 0.5544447274332273,\n \"acc_norm_stderr\": 0.03478665284686247,\n \"mc1\": 0.3292533659730722,\n \"mc1_stderr\": 0.016451264440068232,\n \"mc2\": 0.4779306640850261,\n \"mc2_stderr\": 0.015098925727831657\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.49658703071672355,\n \"acc_stderr\": 0.014611050403244077,\n \"acc_norm\": 0.5307167235494881,\n \"acc_norm_stderr\": 0.014583792546304037\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5758812985461064,\n \"acc_stderr\": 0.004931984642695341,\n \"acc_norm\": 0.7793268273252341,\n \"acc_norm_stderr\": 0.004138529919075824\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.26,\n \"acc_stderr\": 0.04408440022768081,\n \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.04408440022768081\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.4962962962962963,\n \"acc_stderr\": 0.04319223625811331,\n \"acc_norm\": 0.4962962962962963,\n \"acc_norm_stderr\": 0.04319223625811331\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.5263157894736842,\n \"acc_stderr\": 0.04063302731486671,\n \"acc_norm\": 0.5263157894736842,\n \"acc_norm_stderr\": 0.04063302731486671\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.55,\n \"acc_stderr\": 0.05,\n \"acc_norm\": 0.55,\n \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.630188679245283,\n \"acc_stderr\": 0.029711421880107933,\n \"acc_norm\": 0.630188679245283,\n \"acc_norm_stderr\": 0.029711421880107933\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.5625,\n \"acc_stderr\": 0.04148415739394154,\n \"acc_norm\": 0.5625,\n \"acc_norm_stderr\": 0.04148415739394154\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.44,\n \"acc_stderr\": 0.04988876515698589,\n \"acc_norm\": 0.44,\n 
\"acc_norm_stderr\": 0.04988876515698589\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.47,\n \"acc_stderr\": 0.05016135580465919,\n \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.05016135580465919\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5722543352601156,\n \"acc_stderr\": 0.037724468575180276,\n \"acc_norm\": 0.5722543352601156,\n \"acc_norm_stderr\": 0.037724468575180276\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.29411764705882354,\n \"acc_stderr\": 0.04533838195929776,\n \"acc_norm\": 0.29411764705882354,\n \"acc_norm_stderr\": 0.04533838195929776\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.67,\n \"acc_stderr\": 0.04725815626252607,\n \"acc_norm\": 0.67,\n \"acc_norm_stderr\": 0.04725815626252607\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.4425531914893617,\n \"acc_stderr\": 0.03246956919789958,\n \"acc_norm\": 0.4425531914893617,\n \"acc_norm_stderr\": 0.03246956919789958\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.40350877192982454,\n \"acc_stderr\": 0.046151869625837026,\n \"acc_norm\": 0.40350877192982454,\n \"acc_norm_stderr\": 0.046151869625837026\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5172413793103449,\n \"acc_stderr\": 0.04164188720169375,\n \"acc_norm\": 0.5172413793103449,\n \"acc_norm_stderr\": 0.04164188720169375\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.3386243386243386,\n \"acc_stderr\": 0.024373197867983046,\n \"acc_norm\": 0.3386243386243386,\n \"acc_norm_stderr\": 0.024373197867983046\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.36507936507936506,\n \"acc_stderr\": 0.04306241259127153,\n \"acc_norm\": 0.36507936507936506,\n \"acc_norm_stderr\": 0.04306241259127153\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252605,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252605\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.6612903225806451,\n \"acc_stderr\": 0.02692344605930284,\n \"acc_norm\": 0.6612903225806451,\n \"acc_norm_stderr\": 0.02692344605930284\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.3842364532019704,\n \"acc_stderr\": 0.0342239856565755,\n \"acc_norm\": 0.3842364532019704,\n \"acc_norm_stderr\": 0.0342239856565755\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.61,\n \"acc_stderr\": 0.04902071300001974,\n \"acc_norm\": 0.61,\n \"acc_norm_stderr\": 0.04902071300001974\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.6848484848484848,\n \"acc_stderr\": 0.0362773057502241,\n \"acc_norm\": 0.6848484848484848,\n \"acc_norm_stderr\": 0.0362773057502241\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7373737373737373,\n \"acc_stderr\": 0.03135305009533086,\n \"acc_norm\": 0.7373737373737373,\n \"acc_norm_stderr\": 0.03135305009533086\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.7772020725388601,\n \"acc_stderr\": 0.03003114797764154,\n \"acc_norm\": 0.7772020725388601,\n \"acc_norm_stderr\": 0.03003114797764154\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.5051282051282051,\n 
\"acc_stderr\": 0.02534967290683865,\n \"acc_norm\": 0.5051282051282051,\n \"acc_norm_stderr\": 0.02534967290683865\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3037037037037037,\n \"acc_stderr\": 0.02803792996911499,\n \"acc_norm\": 0.3037037037037037,\n \"acc_norm_stderr\": 0.02803792996911499\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.5210084033613446,\n \"acc_stderr\": 0.03244980849990029,\n \"acc_norm\": 0.5210084033613446,\n \"acc_norm_stderr\": 0.03244980849990029\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.33112582781456956,\n \"acc_stderr\": 0.038425817186598696,\n \"acc_norm\": 0.33112582781456956,\n \"acc_norm_stderr\": 0.038425817186598696\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.7339449541284404,\n \"acc_stderr\": 0.0189460223222256,\n \"acc_norm\": 0.7339449541284404,\n \"acc_norm_stderr\": 0.0189460223222256\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5370370370370371,\n \"acc_stderr\": 0.03400603625538272,\n \"acc_norm\": 0.5370370370370371,\n \"acc_norm_stderr\": 0.03400603625538272\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.6911764705882353,\n \"acc_stderr\": 0.03242661719827218,\n \"acc_norm\": 0.6911764705882353,\n \"acc_norm_stderr\": 0.03242661719827218\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.729957805907173,\n \"acc_stderr\": 0.028900721906293426,\n \"acc_norm\": 0.729957805907173,\n \"acc_norm_stderr\": 0.028900721906293426\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6322869955156951,\n \"acc_stderr\": 0.03236198350928276,\n \"acc_norm\": 0.6322869955156951,\n \"acc_norm_stderr\": 0.03236198350928276\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.6335877862595419,\n \"acc_stderr\": 0.04225875451969638,\n \"acc_norm\": 0.6335877862595419,\n \"acc_norm_stderr\": 0.04225875451969638\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.6776859504132231,\n \"acc_stderr\": 0.04266416363352168,\n \"acc_norm\": 0.6776859504132231,\n \"acc_norm_stderr\": 0.04266416363352168\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.6203703703703703,\n \"acc_stderr\": 0.04691521224077742,\n \"acc_norm\": 0.6203703703703703,\n \"acc_norm_stderr\": 0.04691521224077742\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.6319018404907976,\n \"acc_stderr\": 0.03789213935838396,\n \"acc_norm\": 0.6319018404907976,\n \"acc_norm_stderr\": 0.03789213935838396\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4732142857142857,\n \"acc_stderr\": 0.047389751192741546,\n \"acc_norm\": 0.4732142857142857,\n \"acc_norm_stderr\": 0.047389751192741546\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7087378640776699,\n \"acc_stderr\": 0.044986763205729245,\n \"acc_norm\": 0.7087378640776699,\n \"acc_norm_stderr\": 0.044986763205729245\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8333333333333334,\n \"acc_stderr\": 0.024414947304543674,\n \"acc_norm\": 0.8333333333333334,\n \"acc_norm_stderr\": 0.024414947304543674\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.59,\n \"acc_stderr\": 0.049431107042371025,\n \"acc_norm\": 0.59,\n \"acc_norm_stderr\": 0.049431107042371025\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7266922094508301,\n \"acc_stderr\": 0.015936681062628556,\n \"acc_norm\": 0.7266922094508301,\n 
\"acc_norm_stderr\": 0.015936681062628556\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.6069364161849711,\n \"acc_stderr\": 0.026296227915613663,\n \"acc_norm\": 0.6069364161849711,\n \"acc_norm_stderr\": 0.026296227915613663\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2860335195530726,\n \"acc_stderr\": 0.015113972129062129,\n \"acc_norm\": 0.2860335195530726,\n \"acc_norm_stderr\": 0.015113972129062129\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.5849673202614379,\n \"acc_stderr\": 0.028213504177824093,\n \"acc_norm\": 0.5849673202614379,\n \"acc_norm_stderr\": 0.028213504177824093\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6559485530546624,\n \"acc_stderr\": 0.026981478043648033,\n \"acc_norm\": 0.6559485530546624,\n \"acc_norm_stderr\": 0.026981478043648033\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.6111111111111112,\n \"acc_stderr\": 0.027125115513166844,\n \"acc_norm\": 0.6111111111111112,\n \"acc_norm_stderr\": 0.027125115513166844\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4326241134751773,\n \"acc_stderr\": 0.02955545423677885,\n \"acc_norm\": 0.4326241134751773,\n \"acc_norm_stderr\": 0.02955545423677885\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.378748370273794,\n \"acc_stderr\": 0.012389052105003732,\n \"acc_norm\": 0.378748370273794,\n \"acc_norm_stderr\": 0.012389052105003732\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.5367647058823529,\n \"acc_stderr\": 0.030290619180485687,\n \"acc_norm\": 0.5367647058823529,\n \"acc_norm_stderr\": 0.030290619180485687\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.5571895424836601,\n \"acc_stderr\": 0.02009508315457735,\n \"acc_norm\": 0.5571895424836601,\n \"acc_norm_stderr\": 0.02009508315457735\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6181818181818182,\n \"acc_stderr\": 0.04653429807913508,\n \"acc_norm\": 0.6181818181818182,\n \"acc_norm_stderr\": 0.04653429807913508\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.5346938775510204,\n \"acc_stderr\": 0.03193207024425314,\n \"acc_norm\": 0.5346938775510204,\n \"acc_norm_stderr\": 0.03193207024425314\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7761194029850746,\n \"acc_stderr\": 0.0294752502360172,\n \"acc_norm\": 0.7761194029850746,\n \"acc_norm_stderr\": 0.0294752502360172\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.72,\n \"acc_stderr\": 0.04512608598542128,\n \"acc_norm\": 0.72,\n \"acc_norm_stderr\": 0.04512608598542128\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.41566265060240964,\n \"acc_stderr\": 0.03836722176598052,\n \"acc_norm\": 0.41566265060240964,\n \"acc_norm_stderr\": 0.03836722176598052\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.7426900584795322,\n \"acc_stderr\": 0.03352799844161865,\n \"acc_norm\": 0.7426900584795322,\n \"acc_norm_stderr\": 0.03352799844161865\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3292533659730722,\n \"mc1_stderr\": 0.016451264440068232,\n \"mc2\": 0.4779306640850261,\n \"mc2_stderr\": 0.015098925727831657\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7371744277821626,\n \"acc_stderr\": 0.01237092252726201\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.3373768006065201,\n \"acc_stderr\": 0.013023665136222093\n }\n}\n```", "repo_url": "https://huggingface.co/lamhieu/ghost-7b-v0.9.0", 
"leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|arc:challenge|25_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|gsm8k|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hellaswag|10_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T17-50-44.669359.parquet", 
"**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-management|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-virology|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T17-50-44.669359.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-management|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-02-01T17-50-44.669359.parquet", 
"**/details_harness|hendrycksTest-prehistory|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-virology|5_2024-02-01T17-50-44.669359.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T17-50-44.669359.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T17-50-44.669359.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-management|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-virology|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|truthfulqa:mc|0_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["**/details_harness|winogrande|5_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-02-01T17-50-44.669359.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_02_01T17_50_44.669359", "path": ["results_2024-02-01T17-50-44.669359.parquet"]}, {"split": "latest", "path": 
["results_2024-02-01T17-50-44.669359.parquet"]}]}]} | 2024-02-01T17:53:28+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of lamhieu/ghost-7b-v0.9.0
Dataset automatically created during the evaluation run of model lamhieu/ghost-7b-v0.9.0 on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
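The processed copy of this card drops the original code block, leaving the sentence above dangling; the snippet below restores it from the loading example embedded verbatim in this card's metadata. The repository id and configuration name are taken from this card, and any of the 63 configurations listed in the metadata can be substituted for `harness_winogrande_5`.

```python
from datasets import load_dataset

# Load the details of one evaluated task (here Winogrande, 5-shot).
# Per this card, the "train" split always points to the latest results.
data = load_dataset(
    "open-llm-leaderboard/details_lamhieu__ghost-7b-v0.9.0",
    "harness_winogrande_5",
    split="train",
)
```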
## Latest results
These are the latest results from run 2024-02-01T17:50:44.669359 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
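The per-task results table itself was stripped from this processed copy. As a minimal sketch (the `results` configuration and its `latest` split are both declared in this card's metadata), the aggregated numbers referenced above can be reloaded directly:

```python
from datasets import load_dataset

# The "results" configuration stores the aggregated metrics of the run;
# the "latest" split resolves to the most recent results file.
results = load_dataset(
    "open-llm-leaderboard/details_lamhieu__ghost-7b-v0.9.0",
    "results",
    split="latest",
)
```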
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Evaluation run of lamhieu/ghost-7b-v0.9.0\n\n\n\nDataset automatically created during the evaluation run of model lamhieu/ghost-7b-v0.9.0 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-02-01T17:50:44.669359(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of lamhieu/ghost-7b-v0.9.0\n\n\n\nDataset automatically created during the evaluation run of model lamhieu/ghost-7b-v0.9.0 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-02-01T17:50:44.669359(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] |
65fd5148f870c6b928f14be74ad40b0839661e21 |
# Dataset Card for Evaluation run of openagi-project/OpenAGI-7B-v0.2
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [openagi-project/OpenAGI-7B-v0.2](https://huggingface.co/openagi-project/OpenAGI-7B-v0.2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_openagi-project__OpenAGI-7B-v0.2",
"harness_winogrande_5",
split="train")
```
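The aggregated scores themselves live in the "results" configuration; here is a minimal sketch, assuming the "latest" split alias listed in this repo's configs:

```python
from datasets import load_dataset

# One record per evaluation run; "latest" resolves to the newest timestamped split.
results = load_dataset("open-llm-leaderboard/details_openagi-project__OpenAGI-7B-v0.2",
                       "results",
                       split="latest")
print(results[0])  # inspect the stored aggregate record for this run
```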
## Latest results
These are the [latest results from run 2024-02-01T18:03:01.560923](https://huggingface.co/datasets/open-llm-leaderboard/details_openagi-project__OpenAGI-7B-v0.2/blob/main/results_2024-02-01T18-03-01.560923.json) (note that there might be results for other tasks in the repository if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
```python
{
"all": {
"acc": 0.632889617920343,
"acc_stderr": 0.03254482538442493,
"acc_norm": 0.63500329056049,
"acc_norm_stderr": 0.0331984716026879,
"mc1": 0.5593635250917993,
"mc1_stderr": 0.01737969755543745,
"mc2": 0.7204002538063481,
"mc2_stderr": 0.015000816890913878
},
"harness|arc:challenge|25": {
"acc": 0.6621160409556314,
"acc_stderr": 0.013822047922283512,
"acc_norm": 0.6851535836177475,
"acc_norm_stderr": 0.013572657703084948
},
"harness|hellaswag|10": {
"acc": 0.6982672774347739,
"acc_stderr": 0.004580718115992504,
"acc_norm": 0.8602867954590719,
"acc_norm_stderr": 0.003459806991389835
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.37,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.37,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.562962962962963,
"acc_stderr": 0.04284958639753401,
"acc_norm": 0.562962962962963,
"acc_norm_stderr": 0.04284958639753401
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6578947368421053,
"acc_stderr": 0.03860731599316092,
"acc_norm": 0.6578947368421053,
"acc_norm_stderr": 0.03860731599316092
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.58,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.58,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7018867924528301,
"acc_stderr": 0.028152837942493864,
"acc_norm": 0.7018867924528301,
"acc_norm_stderr": 0.028152837942493864
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.75,
"acc_stderr": 0.03621034121889507,
"acc_norm": 0.75,
"acc_norm_stderr": 0.03621034121889507
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.45,
"acc_stderr": 0.05,
"acc_norm": 0.45,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.55,
"acc_stderr": 0.05,
"acc_norm": 0.55,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6184971098265896,
"acc_stderr": 0.037038511930995215,
"acc_norm": 0.6184971098265896,
"acc_norm_stderr": 0.037038511930995215
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.43137254901960786,
"acc_stderr": 0.04928099597287533,
"acc_norm": 0.43137254901960786,
"acc_norm_stderr": 0.04928099597287533
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.74,
"acc_stderr": 0.044084400227680794,
"acc_norm": 0.74,
"acc_norm_stderr": 0.044084400227680794
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5574468085106383,
"acc_stderr": 0.032469569197899575,
"acc_norm": 0.5574468085106383,
"acc_norm_stderr": 0.032469569197899575
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4473684210526316,
"acc_stderr": 0.04677473004491199,
"acc_norm": 0.4473684210526316,
"acc_norm_stderr": 0.04677473004491199
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5517241379310345,
"acc_stderr": 0.04144311810878151,
"acc_norm": 0.5517241379310345,
"acc_norm_stderr": 0.04144311810878151
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.3994708994708995,
"acc_stderr": 0.025225450284067884,
"acc_norm": 0.3994708994708995,
"acc_norm_stderr": 0.025225450284067884
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4365079365079365,
"acc_stderr": 0.04435932892851466,
"acc_norm": 0.4365079365079365,
"acc_norm_stderr": 0.04435932892851466
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252606,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252606
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7258064516129032,
"acc_stderr": 0.0253781399708852,
"acc_norm": 0.7258064516129032,
"acc_norm_stderr": 0.0253781399708852
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5073891625615764,
"acc_stderr": 0.035176035403610105,
"acc_norm": 0.5073891625615764,
"acc_norm_stderr": 0.035176035403610105
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.66,
"acc_stderr": 0.04760952285695237,
"acc_norm": 0.66,
"acc_norm_stderr": 0.04760952285695237
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7818181818181819,
"acc_stderr": 0.03225078108306289,
"acc_norm": 0.7818181818181819,
"acc_norm_stderr": 0.03225078108306289
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.029620227874790486,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.029620227874790486
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8704663212435233,
"acc_stderr": 0.024233532297758723,
"acc_norm": 0.8704663212435233,
"acc_norm_stderr": 0.024233532297758723
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6076923076923076,
"acc_stderr": 0.02475600038213095,
"acc_norm": 0.6076923076923076,
"acc_norm_stderr": 0.02475600038213095
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.337037037037037,
"acc_stderr": 0.028820884666253255,
"acc_norm": 0.337037037037037,
"acc_norm_stderr": 0.028820884666253255
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6680672268907563,
"acc_stderr": 0.03058869701378364,
"acc_norm": 0.6680672268907563,
"acc_norm_stderr": 0.03058869701378364
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3576158940397351,
"acc_stderr": 0.03913453431177258,
"acc_norm": 0.3576158940397351,
"acc_norm_stderr": 0.03913453431177258
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.818348623853211,
"acc_stderr": 0.016530617409266857,
"acc_norm": 0.818348623853211,
"acc_norm_stderr": 0.016530617409266857
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.49537037037037035,
"acc_stderr": 0.03409825519163572,
"acc_norm": 0.49537037037037035,
"acc_norm_stderr": 0.03409825519163572
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.803921568627451,
"acc_stderr": 0.027865942286639325,
"acc_norm": 0.803921568627451,
"acc_norm_stderr": 0.027865942286639325
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7848101265822784,
"acc_stderr": 0.02675082699467617,
"acc_norm": 0.7848101265822784,
"acc_norm_stderr": 0.02675082699467617
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.672645739910314,
"acc_stderr": 0.03149384670994131,
"acc_norm": 0.672645739910314,
"acc_norm_stderr": 0.03149384670994131
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7862595419847328,
"acc_stderr": 0.0359546161177469,
"acc_norm": 0.7862595419847328,
"acc_norm_stderr": 0.0359546161177469
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8264462809917356,
"acc_stderr": 0.03457272836917669,
"acc_norm": 0.8264462809917356,
"acc_norm_stderr": 0.03457272836917669
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.040191074725573483,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.040191074725573483
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7484662576687117,
"acc_stderr": 0.034089978868575295,
"acc_norm": 0.7484662576687117,
"acc_norm_stderr": 0.034089978868575295
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.45535714285714285,
"acc_stderr": 0.047268355537191,
"acc_norm": 0.45535714285714285,
"acc_norm_stderr": 0.047268355537191
},
"harness|hendrycksTest-management|5": {
"acc": 0.7281553398058253,
"acc_stderr": 0.044052680241409216,
"acc_norm": 0.7281553398058253,
"acc_norm_stderr": 0.044052680241409216
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8931623931623932,
"acc_stderr": 0.020237149008990925,
"acc_norm": 0.8931623931623932,
"acc_norm_stderr": 0.020237149008990925
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8212005108556832,
"acc_stderr": 0.013702643715368983,
"acc_norm": 0.8212005108556832,
"acc_norm_stderr": 0.013702643715368983
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7109826589595376,
"acc_stderr": 0.02440517393578323,
"acc_norm": 0.7109826589595376,
"acc_norm_stderr": 0.02440517393578323
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.3776536312849162,
"acc_stderr": 0.01621414875213663,
"acc_norm": 0.3776536312849162,
"acc_norm_stderr": 0.01621414875213663
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6993464052287581,
"acc_stderr": 0.02625605383571896,
"acc_norm": 0.6993464052287581,
"acc_norm_stderr": 0.02625605383571896
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6945337620578779,
"acc_stderr": 0.026160584450140446,
"acc_norm": 0.6945337620578779,
"acc_norm_stderr": 0.026160584450140446
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7129629629629629,
"acc_stderr": 0.025171041915309684,
"acc_norm": 0.7129629629629629,
"acc_norm_stderr": 0.025171041915309684
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4929078014184397,
"acc_stderr": 0.02982449855912901,
"acc_norm": 0.4929078014184397,
"acc_norm_stderr": 0.02982449855912901
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.45827900912646674,
"acc_stderr": 0.01272570165695364,
"acc_norm": 0.45827900912646674,
"acc_norm_stderr": 0.01272570165695364
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6470588235294118,
"acc_stderr": 0.029029422815681397,
"acc_norm": 0.6470588235294118,
"acc_norm_stderr": 0.029029422815681397
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6584967320261438,
"acc_stderr": 0.019184639328092487,
"acc_norm": 0.6584967320261438,
"acc_norm_stderr": 0.019184639328092487
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7181818181818181,
"acc_stderr": 0.043091187099464585,
"acc_norm": 0.7181818181818181,
"acc_norm_stderr": 0.043091187099464585
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.726530612244898,
"acc_stderr": 0.028535560337128438,
"acc_norm": 0.726530612244898,
"acc_norm_stderr": 0.028535560337128438
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.7860696517412935,
"acc_stderr": 0.02899690969332891,
"acc_norm": 0.7860696517412935,
"acc_norm_stderr": 0.02899690969332891
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.84,
"acc_stderr": 0.03684529491774708,
"acc_norm": 0.84,
"acc_norm_stderr": 0.03684529491774708
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5180722891566265,
"acc_stderr": 0.03889951252827216,
"acc_norm": 0.5180722891566265,
"acc_norm_stderr": 0.03889951252827216
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.847953216374269,
"acc_stderr": 0.02753912288906145,
"acc_norm": 0.847953216374269,
"acc_norm_stderr": 0.02753912288906145
},
"harness|truthfulqa:mc|0": {
"mc1": 0.5593635250917993,
"mc1_stderr": 0.01737969755543745,
"mc2": 0.7204002538063481,
"mc2_stderr": 0.015000816890913878
},
"harness|winogrande|5": {
"acc": 0.7916337805840569,
"acc_stderr": 0.011414554399987736
},
"harness|gsm8k|5": {
"acc": 0.5344958301743745,
"acc_stderr": 0.013739668147545913
}
}
```
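As a rough illustration of how the per-task numbers above roll up, the sketch below averages the `acc` of the `hendrycksTest-*` entries, which is approximately how the leaderboard's MMLU column is derived; the `results` variable is assumed to hold the dictionary shown above:

```python
# `results` is assumed to be the dict printed above; when reading the raw JSON
# file from the repo, the per-task scores typically sit under its "results" key.
mmlu_tasks = [k for k in results if k.startswith("harness|hendrycksTest-")]
mmlu_acc = sum(results[k]["acc"] for k in mmlu_tasks) / len(mmlu_tasks)
print(f"MMLU (mean acc over {len(mmlu_tasks)} subjects): {mmlu_acc:.4f}")
```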
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_openagi-project__OpenAGI-7B-v0.2 | [
"region:us"
] | 2024-02-01T18:05:23+00:00 | {"pretty_name": "Evaluation run of openagi-project/OpenAGI-7B-v0.2", "dataset_summary": "Dataset automatically created during the evaluation run of model [openagi-project/OpenAGI-7B-v0.2](https://huggingface.co/openagi-project/OpenAGI-7B-v0.2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_openagi-project__OpenAGI-7B-v0.2\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-02-01T18:03:01.560923](https://huggingface.co/datasets/open-llm-leaderboard/details_openagi-project__OpenAGI-7B-v0.2/blob/main/results_2024-02-01T18-03-01.560923.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.632889617920343,\n \"acc_stderr\": 0.03254482538442493,\n \"acc_norm\": 0.63500329056049,\n \"acc_norm_stderr\": 0.0331984716026879,\n \"mc1\": 0.5593635250917993,\n \"mc1_stderr\": 0.01737969755543745,\n \"mc2\": 0.7204002538063481,\n \"mc2_stderr\": 0.015000816890913878\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6621160409556314,\n \"acc_stderr\": 0.013822047922283512,\n \"acc_norm\": 0.6851535836177475,\n \"acc_norm_stderr\": 0.013572657703084948\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6982672774347739,\n \"acc_stderr\": 0.004580718115992504,\n \"acc_norm\": 0.8602867954590719,\n \"acc_norm_stderr\": 0.003459806991389835\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.37,\n \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.562962962962963,\n \"acc_stderr\": 0.04284958639753401,\n \"acc_norm\": 0.562962962962963,\n \"acc_norm_stderr\": 0.04284958639753401\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6578947368421053,\n \"acc_stderr\": 0.03860731599316092,\n \"acc_norm\": 0.6578947368421053,\n \"acc_norm_stderr\": 0.03860731599316092\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.58,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.58,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.7018867924528301,\n \"acc_stderr\": 0.028152837942493864,\n \"acc_norm\": 0.7018867924528301,\n \"acc_norm_stderr\": 0.028152837942493864\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.75,\n \"acc_stderr\": 0.03621034121889507,\n \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.03621034121889507\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.45,\n 
\"acc_stderr\": 0.05,\n \"acc_norm\": 0.45,\n \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.55,\n \"acc_stderr\": 0.05,\n \"acc_norm\": 0.55,\n \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6184971098265896,\n \"acc_stderr\": 0.037038511930995215,\n \"acc_norm\": 0.6184971098265896,\n \"acc_norm_stderr\": 0.037038511930995215\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.43137254901960786,\n \"acc_stderr\": 0.04928099597287533,\n \"acc_norm\": 0.43137254901960786,\n \"acc_norm_stderr\": 0.04928099597287533\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.74,\n \"acc_stderr\": 0.044084400227680794,\n \"acc_norm\": 0.74,\n \"acc_norm_stderr\": 0.044084400227680794\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5574468085106383,\n \"acc_stderr\": 0.032469569197899575,\n \"acc_norm\": 0.5574468085106383,\n \"acc_norm_stderr\": 0.032469569197899575\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4473684210526316,\n \"acc_stderr\": 0.04677473004491199,\n \"acc_norm\": 0.4473684210526316,\n \"acc_norm_stderr\": 0.04677473004491199\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5517241379310345,\n \"acc_stderr\": 0.04144311810878151,\n \"acc_norm\": 0.5517241379310345,\n \"acc_norm_stderr\": 0.04144311810878151\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.3994708994708995,\n \"acc_stderr\": 0.025225450284067884,\n \"acc_norm\": 0.3994708994708995,\n \"acc_norm_stderr\": 0.025225450284067884\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4365079365079365,\n \"acc_stderr\": 0.04435932892851466,\n \"acc_norm\": 0.4365079365079365,\n \"acc_norm_stderr\": 0.04435932892851466\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252606,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252606\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7258064516129032,\n \"acc_stderr\": 0.0253781399708852,\n \"acc_norm\": 0.7258064516129032,\n \"acc_norm_stderr\": 0.0253781399708852\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5073891625615764,\n \"acc_stderr\": 0.035176035403610105,\n \"acc_norm\": 0.5073891625615764,\n \"acc_norm_stderr\": 0.035176035403610105\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.66,\n \"acc_stderr\": 0.04760952285695237,\n \"acc_norm\": 0.66,\n \"acc_norm_stderr\": 0.04760952285695237\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7818181818181819,\n \"acc_stderr\": 0.03225078108306289,\n \"acc_norm\": 0.7818181818181819,\n \"acc_norm_stderr\": 0.03225078108306289\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7777777777777778,\n \"acc_stderr\": 0.029620227874790486,\n \"acc_norm\": 0.7777777777777778,\n \"acc_norm_stderr\": 0.029620227874790486\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8704663212435233,\n \"acc_stderr\": 0.024233532297758723,\n \"acc_norm\": 0.8704663212435233,\n \"acc_norm_stderr\": 0.024233532297758723\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6076923076923076,\n 
\"acc_stderr\": 0.02475600038213095,\n \"acc_norm\": 0.6076923076923076,\n \"acc_norm_stderr\": 0.02475600038213095\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.337037037037037,\n \"acc_stderr\": 0.028820884666253255,\n \"acc_norm\": 0.337037037037037,\n \"acc_norm_stderr\": 0.028820884666253255\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6680672268907563,\n \"acc_stderr\": 0.03058869701378364,\n \"acc_norm\": 0.6680672268907563,\n \"acc_norm_stderr\": 0.03058869701378364\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3576158940397351,\n \"acc_stderr\": 0.03913453431177258,\n \"acc_norm\": 0.3576158940397351,\n \"acc_norm_stderr\": 0.03913453431177258\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.818348623853211,\n \"acc_stderr\": 0.016530617409266857,\n \"acc_norm\": 0.818348623853211,\n \"acc_norm_stderr\": 0.016530617409266857\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.49537037037037035,\n \"acc_stderr\": 0.03409825519163572,\n \"acc_norm\": 0.49537037037037035,\n \"acc_norm_stderr\": 0.03409825519163572\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.803921568627451,\n \"acc_stderr\": 0.027865942286639325,\n \"acc_norm\": 0.803921568627451,\n \"acc_norm_stderr\": 0.027865942286639325\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7848101265822784,\n \"acc_stderr\": 0.02675082699467617,\n \"acc_norm\": 0.7848101265822784,\n \"acc_norm_stderr\": 0.02675082699467617\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.672645739910314,\n \"acc_stderr\": 0.03149384670994131,\n \"acc_norm\": 0.672645739910314,\n \"acc_norm_stderr\": 0.03149384670994131\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7862595419847328,\n \"acc_stderr\": 0.0359546161177469,\n \"acc_norm\": 0.7862595419847328,\n \"acc_norm_stderr\": 0.0359546161177469\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8264462809917356,\n \"acc_stderr\": 0.03457272836917669,\n \"acc_norm\": 0.8264462809917356,\n \"acc_norm_stderr\": 0.03457272836917669\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7777777777777778,\n \"acc_stderr\": 0.040191074725573483,\n \"acc_norm\": 0.7777777777777778,\n \"acc_norm_stderr\": 0.040191074725573483\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7484662576687117,\n \"acc_stderr\": 0.034089978868575295,\n \"acc_norm\": 0.7484662576687117,\n \"acc_norm_stderr\": 0.034089978868575295\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.45535714285714285,\n \"acc_stderr\": 0.047268355537191,\n \"acc_norm\": 0.45535714285714285,\n \"acc_norm_stderr\": 0.047268355537191\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7281553398058253,\n \"acc_stderr\": 0.044052680241409216,\n \"acc_norm\": 0.7281553398058253,\n \"acc_norm_stderr\": 0.044052680241409216\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8931623931623932,\n \"acc_stderr\": 0.020237149008990925,\n \"acc_norm\": 0.8931623931623932,\n \"acc_norm_stderr\": 0.020237149008990925\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8212005108556832,\n \"acc_stderr\": 0.013702643715368983,\n \"acc_norm\": 0.8212005108556832,\n 
\"acc_norm_stderr\": 0.013702643715368983\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7109826589595376,\n \"acc_stderr\": 0.02440517393578323,\n \"acc_norm\": 0.7109826589595376,\n \"acc_norm_stderr\": 0.02440517393578323\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3776536312849162,\n \"acc_stderr\": 0.01621414875213663,\n \"acc_norm\": 0.3776536312849162,\n \"acc_norm_stderr\": 0.01621414875213663\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.6993464052287581,\n \"acc_stderr\": 0.02625605383571896,\n \"acc_norm\": 0.6993464052287581,\n \"acc_norm_stderr\": 0.02625605383571896\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6945337620578779,\n \"acc_stderr\": 0.026160584450140446,\n \"acc_norm\": 0.6945337620578779,\n \"acc_norm_stderr\": 0.026160584450140446\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7129629629629629,\n \"acc_stderr\": 0.025171041915309684,\n \"acc_norm\": 0.7129629629629629,\n \"acc_norm_stderr\": 0.025171041915309684\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4929078014184397,\n \"acc_stderr\": 0.02982449855912901,\n \"acc_norm\": 0.4929078014184397,\n \"acc_norm_stderr\": 0.02982449855912901\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.45827900912646674,\n \"acc_stderr\": 0.01272570165695364,\n \"acc_norm\": 0.45827900912646674,\n \"acc_norm_stderr\": 0.01272570165695364\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6470588235294118,\n \"acc_stderr\": 0.029029422815681397,\n \"acc_norm\": 0.6470588235294118,\n \"acc_norm_stderr\": 0.029029422815681397\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6584967320261438,\n \"acc_stderr\": 0.019184639328092487,\n \"acc_norm\": 0.6584967320261438,\n \"acc_norm_stderr\": 0.019184639328092487\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7181818181818181,\n \"acc_stderr\": 0.043091187099464585,\n \"acc_norm\": 0.7181818181818181,\n \"acc_norm_stderr\": 0.043091187099464585\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.726530612244898,\n \"acc_stderr\": 0.028535560337128438,\n \"acc_norm\": 0.726530612244898,\n \"acc_norm_stderr\": 0.028535560337128438\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7860696517412935,\n \"acc_stderr\": 0.02899690969332891,\n \"acc_norm\": 0.7860696517412935,\n \"acc_norm_stderr\": 0.02899690969332891\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.84,\n \"acc_stderr\": 0.03684529491774708,\n \"acc_norm\": 0.84,\n \"acc_norm_stderr\": 0.03684529491774708\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5180722891566265,\n \"acc_stderr\": 0.03889951252827216,\n \"acc_norm\": 0.5180722891566265,\n \"acc_norm_stderr\": 0.03889951252827216\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.847953216374269,\n \"acc_stderr\": 0.02753912288906145,\n \"acc_norm\": 0.847953216374269,\n \"acc_norm_stderr\": 0.02753912288906145\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5593635250917993,\n \"mc1_stderr\": 0.01737969755543745,\n \"mc2\": 0.7204002538063481,\n \"mc2_stderr\": 0.015000816890913878\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7916337805840569,\n \"acc_stderr\": 0.011414554399987736\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.5344958301743745,\n \"acc_stderr\": 0.013739668147545913\n }\n}\n```", "repo_url": "https://huggingface.co/openagi-project/OpenAGI-7B-v0.2", 
"leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|arc:challenge|25_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|gsm8k|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hellaswag|10_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T18-03-01.560923.parquet", 
"**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-management|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-virology|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T18-03-01.560923.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-management|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-02-01T18-03-01.560923.parquet", 
"**/details_harness|hendrycksTest-prehistory|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-virology|5_2024-02-01T18-03-01.560923.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T18-03-01.560923.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T18-03-01.560923.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-management|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-virology|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|truthfulqa:mc|0_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["**/details_harness|winogrande|5_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-02-01T18-03-01.560923.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_02_01T18_03_01.560923", "path": ["results_2024-02-01T18-03-01.560923.parquet"]}, {"split": "latest", "path": 
["results_2024-02-01T18-03-01.560923.parquet"]}]}]} | 2024-02-01T18:05:46+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of openagi-project/OpenAGI-7B-v0.2
Dataset automatically created during the evaluation run of model openagi-project/OpenAGI-7B-v0.2 on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
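A minimal sketch of what that loading could look like, assuming the repository id follows the leaderboard's usual `details_<org>__<model>` naming scheme (the exact id is an assumption, not stated in this card):

```python
from datasets import load_dataset

# Load the details of a single evaluated task. The second argument is one of
# the config names listed in the metadata above, and the "latest" split
# always resolves to the most recent run.
# NOTE: the repository id below is assumed from the usual
# open-llm-leaderboard naming scheme; adjust it if the repo lives elsewhere.
data = load_dataset(
    "open-llm-leaderboard/details_openagi-project__OpenAGI-7B-v0.2",
    "harness_winogrande_5",
    split="latest",
)
```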
## Latest results
These are the latest results from run 2024-02-01T18:03:01.560923 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each of them in the "results" configuration and in the "latest" split of each eval).
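Since the raw per-run metrics were not preserved in this card, a hedged way to inspect them is to read the "results" configuration directly (same repository-id assumption as the loading sketch above):

```python
from datasets import load_dataset

# The "results" config aggregates the metrics of every task in the run;
# its "latest" split resolves to the newest results parquet file.
results = load_dataset(
    "open-llm-leaderboard/details_openagi-project__OpenAGI-7B-v0.2",
    "results",
    split="latest",
)
print(results[0])  # one row holding the aggregated metrics for this run
```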
## Dataset Details
### Dataset Description
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]

**BibTeX:** [More Information Needed]

**APA:** [More Information Needed]
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact