# Dataset Card for Evaluation run of jsfs11/TurdusTrixBeagle-DARETIES-7B

<!-- Provide a quick summary of the dataset. -->

Dataset automatically created during the evaluation run of model [jsfs11/TurdusTrixBeagle-DARETIES-7B](https://huggingface.co/jsfs11/TurdusTrixBeagle-DARETIES-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

The dataset is composed of 63 configurations, each corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, with the split named after the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).

To load the details from a run, you can for instance do the following:

```python
from datasets import load_dataset

data = load_dataset("open-llm-leaderboard/details_jsfs11__TurdusTrixBeagle-DARETIES-7B",
	"harness_winogrande_5",
	split="train")
```

## Latest results

These are the [latest results from run 2024-01-24T06:52:34.475524](https://huggingface.co/datasets/open-llm-leaderboard/details_jsfs11__TurdusTrixBeagle-DARETIES-7B/blob/main/results_2024-01-24T06-52-34.475524.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks.
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.655345770405219, "acc_stderr": 0.032004831458594445, "acc_norm": 0.6544154239232413, "acc_norm_stderr": 0.03267916416687105, "mc1": 0.5593635250917993, "mc1_stderr": 0.017379697555437442, "mc2": 0.6881243184665276, "mc2_stderr": 0.015188166386714394 }, "harness|arc:challenge|25": { "acc": 0.71160409556314, "acc_stderr": 0.013238394422428173, "acc_norm": 0.734641638225256, "acc_norm_stderr": 0.012902554762313962 }, "harness|hellaswag|10": { "acc": 0.7202748456482773, "acc_stderr": 0.0044794676194648, "acc_norm": 0.8860784704242183, "acc_norm_stderr": 0.0031706661225176552 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.34, "acc_stderr": 0.04760952285695235, "acc_norm": 0.34, "acc_norm_stderr": 0.04760952285695235 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.6666666666666666, "acc_stderr": 0.04072314811876837, "acc_norm": 0.6666666666666666, "acc_norm_stderr": 0.04072314811876837 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.7039473684210527, "acc_stderr": 0.03715062154998904, "acc_norm": 0.7039473684210527, "acc_norm_stderr": 0.03715062154998904 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.66, "acc_stderr": 0.04760952285695238, "acc_norm": 0.66, "acc_norm_stderr": 0.04760952285695238 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.7283018867924528, "acc_stderr": 0.027377706624670713, "acc_norm": 0.7283018867924528, "acc_norm_stderr": 0.027377706624670713 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.7986111111111112, "acc_stderr": 0.03353647469713839, "acc_norm": 0.7986111111111112, "acc_norm_stderr": 0.03353647469713839 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.47, "acc_stderr": 0.050161355804659205, "acc_norm": 0.47, "acc_norm_stderr": 0.050161355804659205 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.53, "acc_stderr": 0.050161355804659205, "acc_norm": 0.53, "acc_norm_stderr": 
0.050161355804659205 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.32, "acc_stderr": 0.04688261722621504, "acc_norm": 0.32, "acc_norm_stderr": 0.04688261722621504 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6763005780346821, "acc_stderr": 0.035676037996391706, "acc_norm": 0.6763005780346821, "acc_norm_stderr": 0.035676037996391706 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.43137254901960786, "acc_stderr": 0.04928099597287534, "acc_norm": 0.43137254901960786, "acc_norm_stderr": 0.04928099597287534 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.75, "acc_stderr": 0.04351941398892446, "acc_norm": 0.75, "acc_norm_stderr": 0.04351941398892446 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.574468085106383, "acc_stderr": 0.03232146916224468, "acc_norm": 0.574468085106383, "acc_norm_stderr": 0.03232146916224468 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.4824561403508772, "acc_stderr": 0.04700708033551038, "acc_norm": 0.4824561403508772, "acc_norm_stderr": 0.04700708033551038 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.5655172413793104, "acc_stderr": 0.04130740879555498, "acc_norm": 0.5655172413793104, "acc_norm_stderr": 0.04130740879555498 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.4312169312169312, "acc_stderr": 0.025506481698138208, "acc_norm": 0.4312169312169312, "acc_norm_stderr": 0.025506481698138208 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.47619047619047616, "acc_stderr": 0.04467062628403273, "acc_norm": 0.47619047619047616, "acc_norm_stderr": 0.04467062628403273 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.32, "acc_stderr": 0.04688261722621504, "acc_norm": 0.32, "acc_norm_stderr": 0.04688261722621504 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.7935483870967742, "acc_stderr": 0.023025899617188723, "acc_norm": 0.7935483870967742, "acc_norm_stderr": 0.023025899617188723 }, 
"harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.5172413793103449, "acc_stderr": 0.035158955511656986, "acc_norm": 0.5172413793103449, "acc_norm_stderr": 0.035158955511656986 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.71, "acc_stderr": 0.045604802157206845, "acc_norm": 0.71, "acc_norm_stderr": 0.045604802157206845 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.7636363636363637, "acc_stderr": 0.03317505930009182, "acc_norm": 0.7636363636363637, "acc_norm_stderr": 0.03317505930009182 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.8080808080808081, "acc_stderr": 0.028057791672989017, "acc_norm": 0.8080808080808081, "acc_norm_stderr": 0.028057791672989017 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.9067357512953368, "acc_stderr": 0.020986854593289733, "acc_norm": 0.9067357512953368, "acc_norm_stderr": 0.020986854593289733 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.6692307692307692, "acc_stderr": 0.02385479568097112, "acc_norm": 0.6692307692307692, "acc_norm_stderr": 0.02385479568097112 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.3296296296296296, "acc_stderr": 0.02866120111652457, "acc_norm": 0.3296296296296296, "acc_norm_stderr": 0.02866120111652457 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.6764705882352942, "acc_stderr": 0.030388353551886793, "acc_norm": 0.6764705882352942, "acc_norm_stderr": 0.030388353551886793 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.3576158940397351, "acc_stderr": 0.03913453431177258, "acc_norm": 0.3576158940397351, "acc_norm_stderr": 0.03913453431177258 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8422018348623853, "acc_stderr": 0.015630022970092444, "acc_norm": 0.8422018348623853, "acc_norm_stderr": 0.015630022970092444 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.5185185185185185, "acc_stderr": 
0.03407632093854051, "acc_norm": 0.5185185185185185, "acc_norm_stderr": 0.03407632093854051 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.8333333333333334, "acc_stderr": 0.026156867523931045, "acc_norm": 0.8333333333333334, "acc_norm_stderr": 0.026156867523931045 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.7890295358649789, "acc_stderr": 0.026558372502661916, "acc_norm": 0.7890295358649789, "acc_norm_stderr": 0.026558372502661916 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.695067264573991, "acc_stderr": 0.030898610882477515, "acc_norm": 0.695067264573991, "acc_norm_stderr": 0.030898610882477515 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.7862595419847328, "acc_stderr": 0.0359546161177469, "acc_norm": 0.7862595419847328, "acc_norm_stderr": 0.0359546161177469 }, "harness|hendrycksTest-international_law|5": { "acc": 0.8016528925619835, "acc_stderr": 0.03640118271990946, "acc_norm": 0.8016528925619835, "acc_norm_stderr": 0.03640118271990946 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.7777777777777778, "acc_stderr": 0.0401910747255735, "acc_norm": 0.7777777777777778, "acc_norm_stderr": 0.0401910747255735 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7484662576687117, "acc_stderr": 0.03408997886857529, "acc_norm": 0.7484662576687117, "acc_norm_stderr": 0.03408997886857529 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.4375, "acc_stderr": 0.04708567521880525, "acc_norm": 0.4375, "acc_norm_stderr": 0.04708567521880525 }, "harness|hendrycksTest-management|5": { "acc": 0.7669902912621359, "acc_stderr": 0.04185832598928315, "acc_norm": 0.7669902912621359, "acc_norm_stderr": 0.04185832598928315 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8846153846153846, "acc_stderr": 0.02093019318517933, "acc_norm": 0.8846153846153846, "acc_norm_stderr": 0.02093019318517933 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.72, "acc_stderr": 0.045126085985421276, "acc_norm": 0.72, 
"acc_norm_stderr": 0.045126085985421276 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.822477650063857, "acc_stderr": 0.013664230995834834, "acc_norm": 0.822477650063857, "acc_norm_stderr": 0.013664230995834834 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.7283236994219653, "acc_stderr": 0.023948512905468365, "acc_norm": 0.7283236994219653, "acc_norm_stderr": 0.023948512905468365 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.4201117318435754, "acc_stderr": 0.016507671073256402, "acc_norm": 0.4201117318435754, "acc_norm_stderr": 0.016507671073256402 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.7058823529411765, "acc_stderr": 0.026090162504279053, "acc_norm": 0.7058823529411765, "acc_norm_stderr": 0.026090162504279053 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.7202572347266881, "acc_stderr": 0.02549425935069491, "acc_norm": 0.7202572347266881, "acc_norm_stderr": 0.02549425935069491 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.7438271604938271, "acc_stderr": 0.0242885336377261, "acc_norm": 0.7438271604938271, "acc_norm_stderr": 0.0242885336377261 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.48226950354609927, "acc_stderr": 0.02980873964223777, "acc_norm": 0.48226950354609927, "acc_norm_stderr": 0.02980873964223777 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.47392438070404175, "acc_stderr": 0.01275285834653313, "acc_norm": 0.47392438070404175, "acc_norm_stderr": 0.01275285834653313 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.6654411764705882, "acc_stderr": 0.028661996202335303, "acc_norm": 0.6654411764705882, "acc_norm_stderr": 0.028661996202335303 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6650326797385621, "acc_stderr": 0.019094228167000325, "acc_norm": 0.6650326797385621, "acc_norm_stderr": 0.019094228167000325 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6636363636363637, "acc_stderr": 0.04525393596302506, "acc_norm": 
0.6636363636363637, "acc_norm_stderr": 0.04525393596302506 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.7306122448979592, "acc_stderr": 0.02840125202902294, "acc_norm": 0.7306122448979592, "acc_norm_stderr": 0.02840125202902294 }, "harness|hendrycksTest-sociology|5": { "acc": 0.8407960199004975, "acc_stderr": 0.025870646766169136, "acc_norm": 0.8407960199004975, "acc_norm_stderr": 0.025870646766169136 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.86, "acc_stderr": 0.0348735088019777, "acc_norm": 0.86, "acc_norm_stderr": 0.0348735088019777 }, "harness|hendrycksTest-virology|5": { "acc": 0.572289156626506, "acc_stderr": 0.038515976837185335, "acc_norm": 0.572289156626506, "acc_norm_stderr": 0.038515976837185335 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.8362573099415205, "acc_stderr": 0.028380919596145866, "acc_norm": 0.8362573099415205, "acc_norm_stderr": 0.028380919596145866 }, "harness|truthfulqa:mc|0": { "mc1": 0.5593635250917993, "mc1_stderr": 0.017379697555437442, "mc2": 0.6881243184665276, "mc2_stderr": 0.015188166386714394 }, "harness|winogrande|5": { "acc": 0.8516179952644041, "acc_stderr": 0.009990706005184135 }, "harness|gsm8k|5": { "acc": 0.7028051554207733, "acc_stderr": 0.012588685966624184 } }
```

## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]

### Dataset Sources [optional]

<!-- Provide the basic links for the dataset. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the dataset is intended to be used.
-->

### Direct Use

<!-- This section describes suitable use cases for the dataset. -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->

[More Information Needed]

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

[More Information Needed]

## Dataset Creation

### Curation Rationale

<!-- Motivation for the creation of this dataset. -->

[More Information Needed]

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

[More Information Needed]

#### Who are the source data producers?

<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->

[More Information Needed]

### Annotations [optional]

<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->

#### Annotation process

<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->

[More Information Needed]

#### Who are the annotators?

<!-- This section describes the people or systems who created the annotations. -->

[More Information Needed]

#### Personal and Sensitive Information

<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.

## Citation [optional]

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Dataset Card Authors [optional]

[More Information Needed]

## Dataset Card Contact

[More Information Needed]
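The per-task entries in the "Latest results" section all follow the `harness|<task>|<num_fewshot>` key convention. As a minimal sketch of post-processing them (the small sample dict below is an excerpt of the results shown above; in practice you would load the full results JSON from the repo rather than hard-coding it):

```python
# Sketch: flatten a harness-style results dict into (task, fewshot, metric, value)
# rows. The sample below is a hand-copied excerpt of the results in this card.
results = {
    "harness|arc:challenge|25": {"acc": 0.71160409556314, "acc_norm": 0.734641638225256},
    "harness|winogrande|5": {"acc": 0.8516179952644041},
    "harness|gsm8k|5": {"acc": 0.7028051554207733},
}

def flatten_results(results):
    """Yield (task, num_fewshot, metric, value) rows from a harness results dict."""
    rows = []
    for key, metrics in results.items():
        # Keys look like "harness|<task>|<num_fewshot>".
        _, task, fewshot = key.split("|")
        for metric, value in metrics.items():
            rows.append((task, int(fewshot), metric, value))
    return rows

rows = flatten_results(results)
```

This is useful, for example, to turn the nested JSON into a table for comparison across runs.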
"**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T06-52-34.475524.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T06-52-34.475524.parquet", 
"**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T06-52-34.475524.parquet", 
"**/details_harness|hendrycksTest-public_relations|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-24T06-52-34.475524.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_24T06_52_34.475524", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_24T06_52_34.475524", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_24T06_52_34.475524", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_24T06_52_34.475524", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_24T06_52_34.475524", "path": 
["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_24T06_52_34.475524", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_24T06_52_34.475524", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_24T06_52_34.475524", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_24T06_52_34.475524", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_24T06_52_34.475524", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_24T06_52_34.475524", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_24T06_52_34.475524", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_24T06_52_34.475524", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_24T06_52_34.475524", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_24T06_52_34.475524", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_24T06_52_34.475524", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_24T06_52_34.475524", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_24T06_52_34.475524", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_24T06_52_34.475524", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_24T06_52_34.475524", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_24T06_52_34.475524", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_24T06_52_34.475524", "path": 
["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_24T06_52_34.475524", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_24T06_52_34.475524", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_24T06_52_34.475524", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_24T06_52_34.475524", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_24T06_52_34.475524", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_24T06_52_34.475524", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_24T06_52_34.475524", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_24T06_52_34.475524", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_24T06_52_34.475524", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_24T06_52_34.475524", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_24T06_52_34.475524", 
"path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_24T06_52_34.475524", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_24T06_52_34.475524", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_24T06_52_34.475524", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_24T06_52_34.475524", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_24T06_52_34.475524", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": 
"2024_01_24T06_52_34.475524", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_24T06_52_34.475524", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_24T06_52_34.475524", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_24T06_52_34.475524", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_24T06_52_34.475524", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_24T06_52_34.475524", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": 
"2024_01_24T06_52_34.475524", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_24T06_52_34.475524", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_24T06_52_34.475524", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_24T06_52_34.475524", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_24T06_52_34.475524", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_24T06_52_34.475524", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_24T06_52_34.475524", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_24T06_52_34.475524", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_24T06_52_34.475524", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_24T06_52_34.475524", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_24T06_52_34.475524", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_24T06_52_34.475524", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-24T06-52-34.475524.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_24T06_52_34.475524", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_24T06_52_34.475524", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_24T06_52_34.475524", "path": ["**/details_harness|winogrande|5_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-24T06-52-34.475524.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_01_24T06_52_34.475524", "path": ["results_2024-01-24T06-52-34.475524.parquet"]}, {"split": "latest", "path": ["results_2024-01-24T06-52-34.475524.parquet"]}]}]}
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of jsfs11/TurdusTrixBeagle-DARETIES-7B\n\n\n\nDataset automatically created during the evaluation run of model jsfs11/TurdusTrixBeagle-DARETIES-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-01-24T06:52:34.475524(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
382c65cb95f62a7bd6be949898189f1a8d39069d
# Dataset Card for "ultrafeedback_binarized_relabelled_ultrarm" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Asap7772/ultrafeedback_binarized_relabelled_ultrarm
[ "region:us" ]
2024-01-24T07:11:56+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train_prefs", "path": "data/train_prefs-*"}, {"split": "test_prefs", "path": "data/test_prefs-*"}]}], "dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "prompt_id", "dtype": "string"}, {"name": "chosen", "dtype": "string"}, {"name": "rejected", "dtype": "string"}, {"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "score_chosen", "dtype": "float64"}, {"name": "score_rejected", "dtype": "float64"}, {"name": "reward_chosen", "dtype": "float64"}, {"name": "reward_rejected", "dtype": "float64"}], "splits": [{"name": "train_prefs", "num_bytes": 405566392, "num_examples": 61135}, {"name": "test_prefs", "num_bytes": 13157585, "num_examples": 2000}], "download_size": 235095739, "dataset_size": 418723977}}
2024-01-24T07:12:10+00:00
[]
[]
TAGS #region-us
# Dataset Card for "ultrafeedback_binarized_relabelled_ultrarm" More Information needed
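The repo metadata above lists the features of each preference record (`prompt`, `chosen`, `rejected`, UltraRM relabelling scores, etc.). A minimal sketch of that record shape, with illustrative placeholder values rather than real rows, and the kind of sanity checks a consumer might run after loading the `train_prefs` split:

```python
# Shape of one preference record, following the feature list in this repo's
# metadata. All field values below are illustrative placeholders.
record = {
    "prompt": "Explain photosynthesis briefly.",
    "prompt_id": "abc123",
    "chosen": "Photosynthesis converts light energy into chemical energy...",
    "rejected": "It is when plants eat sunlight.",
    "messages": [
        {"content": "Explain photosynthesis briefly.", "role": "user"},
        {"content": "Photosynthesis converts light energy...", "role": "assistant"},
    ],
    "score_chosen": 8.5,
    "score_rejected": 3.0,
    "reward_chosen": 1.7,
    "reward_rejected": -0.9,
}

# Field names taken from the dataset_info block above.
expected = {"prompt", "prompt_id", "chosen", "rejected", "messages",
            "score_chosen", "score_rejected", "reward_chosen", "reward_rejected"}
assert set(record) == expected
assert all(m["role"] in {"user", "assistant"} for m in record["messages"])
```

Note that nothing in the card guarantees `reward_chosen > reward_rejected` for every relabelled row; treat the scores as annotations to inspect, not invariants.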
[ "# Dataset Card for \"ultrafeedback_binarized_relabelled_ultrarm\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"ultrafeedback_binarized_relabelled_ultrarm\"\n\nMore Information needed" ]
75306d087e7c2cb2d45947c4abd4d2aac7e70428
# Dataset of marciana/マルチャーナ/玛律恰那/마르차나 (Nikke: Goddess of Victory) This is the dataset of marciana/マルチャーナ/玛律恰那/마르차나 (Nikke: Goddess of Victory), containing 69 images and their tags. The core tags of this character are `long_hair, breasts, bangs, large_breasts, brown_hair, brown_eyes, black_hair, yellow_eyes, very_long_hair`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 69 | 132.33 MiB | [Download](https://huggingface.co/datasets/CyberHarem/marciana_nikke/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 69 | 57.22 MiB | [Download](https://huggingface.co/datasets/CyberHarem/marciana_nikke/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 175 | 125.86 MiB | [Download](https://huggingface.co/datasets/CyberHarem/marciana_nikke/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 69 | 107.59 MiB | [Download](https://huggingface.co/datasets/CyberHarem/marciana_nikke/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. 
| | stage3-p480-1200 | 175 | 205.66 MiB | [Download](https://huggingface.co/datasets/CyberHarem/marciana_nikke/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/marciana_nikke', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. 
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 69 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, blush, solo, looking_at_viewer, white_gloves, long_sleeves, white_pants, crop_top, midriff, uniform, navel_piercing, simple_background, epaulettes, white_background, ascot | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blush | solo | looking_at_viewer | white_gloves | long_sleeves | white_pants | crop_top | midriff | uniform | navel_piercing | simple_background | epaulettes | white_background | ascot | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:-------|:--------------------|:---------------|:---------------|:--------------|:-----------|:----------|:----------|:-----------------|:--------------------|:-------------|:-------------------|:--------| | 0 | 69 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
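The IMG+TXT packages listed above (e.g. `dataset-800.zip`) pair each image with a same-named `.txt` file of comma-separated tags. A minimal pairing sketch over an extracted directory — the file names and helper below are hypothetical, not part of the waifuc API:

```python
import os
import tempfile

def pair_images_and_tags(dataset_dir: str) -> dict:
    """Map each image stem to its list of tags from the sibling .txt file."""
    pairs = {}
    for name in os.listdir(dataset_dir):
        stem, ext = os.path.splitext(name)
        if ext.lower() in {".png", ".jpg", ".webp"}:
            txt_path = os.path.join(dataset_dir, stem + ".txt")
            if os.path.exists(txt_path):
                with open(txt_path, encoding="utf-8") as f:
                    pairs[stem] = [t.strip() for t in f.read().split(",")]
    return pairs

# Demo on a throwaway directory standing in for an extracted dataset-800.zip.
with tempfile.TemporaryDirectory() as d:
    open(os.path.join(d, "0001.png"), "wb").close()
    with open(os.path.join(d, "0001.txt"), "w", encoding="utf-8") as f:
        f.write("1girl, solo, looking_at_viewer")
    pairs = pair_images_and_tags(d)
    print(pairs["0001"])  # ['1girl', 'solo', 'looking_at_viewer']
```

The same pairing applies to the `stage3-p480-*` cropped packages; only the raw package needs the waifuc loader shown in the card.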
CyberHarem/marciana_nikke
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2024-01-24T07:24:52+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-24T07:41:19+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of marciana/マルチャーナ/玛律恰那/마르차나 (Nikke: Goddess of Victory) ================================================================ This is the dataset of marciana/マルチャーナ/玛律恰那/마르차나 (Nikke: Goddess of Victory), containing 69 images and their tags. The core tags of this character are 'long\_hair, breasts, bangs, large\_breasts, brown\_hair, brown\_eyes, black\_hair, yellow\_eyes, very\_long\_hair', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the DeepGHS Team (huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide the raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code. List of Clusters ---------------- List of tag clustering results; some outfits may be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
8d1398d2d1bd56225a53df41b34796ef686787d2
# Dataset of poli/ポリ/波莉/폴리 (Nikke: Goddess of Victory) This is the dataset of poli/ポリ/波莉/폴리 (Nikke: Goddess of Victory), containing 38 images and their tags. The core tags of this character are `long_hair, blue_eyes, white_hair, hat, bangs, very_long_hair, fang, grey_hair`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:----------|:------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 38 | 55.81 MiB | [Download](https://huggingface.co/datasets/CyberHarem/poli_nikke/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 38 | 29.30 MiB | [Download](https://huggingface.co/datasets/CyberHarem/poli_nikke/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 89 | 63.94 MiB | [Download](https://huggingface.co/datasets/CyberHarem/poli_nikke/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 38 | 47.16 MiB | [Download](https://huggingface.co/datasets/CyberHarem/poli_nikke/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 89 | 95.41 MiB | [Download](https://huggingface.co/datasets/CyberHarem/poli_nikke/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. 
| ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/poli_nikke', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 13 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, solo, black_gloves, holding_gun, smile, open_mouth, socks, looking_at_viewer, police_uniform, white_background, assault_rifle, bandaid_on_knee, full_body, shoes, simple_background | | 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, black_gloves, policewoman, 
looking_at_viewer, short_sleeves, smile, solo, sunglasses, holding, pencil_skirt, police_hat, black_headwear, blue_headwear, closed_mouth, cloud, hair_between_eyes, hand_on_hip, open_mouth, outdoors, walkie-talkie | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | black_gloves | holding_gun | smile | open_mouth | socks | looking_at_viewer | police_uniform | white_background | assault_rifle | bandaid_on_knee | full_body | shoes | simple_background | policewoman | short_sleeves | sunglasses | holding | pencil_skirt | police_hat | black_headwear | blue_headwear | closed_mouth | cloud | hair_between_eyes | hand_on_hip | outdoors | walkie-talkie | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:---------------|:--------------|:--------|:-------------|:--------|:--------------------|:-----------------|:-------------------|:----------------|:------------------|:------------|:--------|:--------------------|:--------------|:----------------|:-------------|:----------|:---------------|:-------------|:-----------------|:----------------|:---------------|:--------|:--------------------|:--------------|:-----------|:----------------| | 0 | 13 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | | X | X | | X | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
CyberHarem/poli_nikke
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2024-01-24T07:27:58+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-24T07:40:16+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of poli/ポリ/波莉/폴리 (Nikke: Goddess of Victory) ==================================================== This is the dataset of poli/ポリ/波莉/폴리 (Nikke: Goddess of Victory), containing 38 images and their tags. The core tags of this character are 'long\_hair, blue\_eyes, white\_hair, hat, bangs, very\_long\_hair, fang, grey\_hair', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the DeepGHS Team (huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide the raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code. List of Clusters ---------------- List of tag clustering results; some outfits may be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
c2a994d390e4783235ce08c993a30308f347d389
# Dataset of ludmilla/ルドミラ/鲁德米拉/루드밀라 (Nikke: Goddess of Victory) This is the dataset of ludmilla/ルドミラ/鲁德米拉/루드밀라 (Nikke: Goddess of Victory), containing 71 images and their tags. The core tags of this character are `long_hair, breasts, blonde_hair, large_breasts, red_eyes, bangs`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 71 | 154.15 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ludmilla_nikke/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 71 | 71.82 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ludmilla_nikke/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 190 | 161.20 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ludmilla_nikke/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 71 | 126.85 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ludmilla_nikke/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 190 | 255.91 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ludmilla_nikke/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. 
| ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/ludmilla_nikke', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 5 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, black_gloves, cleavage, looking_at_viewer, smile, solo, bare_shoulders, black_panties, black_thighhighs, collarbone, garter_straps, thighs, blush, choker, closed_mouth, fur-trimmed_coat, white_coat, butterfly, lying, standing, white_dress | | 1 | 10 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | 
![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, black_gloves, cleavage, looking_at_viewer, solo, bare_shoulders, black_thighhighs, garter_straps, smile, standing, white_dress, choker, full_body, fur_trim, closed_mouth, high_heels, holding_lantern, off_shoulder, white_footwear | | 2 | 13 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | white_headwear, 1girl, fur_trim, solo, looking_at_viewer, thighhighs, cleavage, garter_straps, long_sleeves, white_dress, white_gloves, fur_collar, holding, parted_lips, smile, ushanka, white_jacket, standing, open_jacket, blush, papakha, short_dress, simple_background | | 3 | 7 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | blush, hetero, nipples, sex, 1boy, 1girl, open_mouth, penis, pussy, solo_focus, thighhighs, vaginal, mosaic_censoring, nude, spread_legs, sweat, navel, parted_bangs, breast_grab, cowgirl_position, girl_on_top, grabbing, missionary, on_back | | 4 | 6 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | 1girl, blush, looking_at_viewer, solo, onsen, water, completely_nude, smile, wet, cleavage, collarbone, navel, night, parted_bangs, thighs | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | black_gloves | cleavage | looking_at_viewer | smile | solo | bare_shoulders | black_panties | black_thighhighs | collarbone | garter_straps | thighs | blush | choker | closed_mouth | fur-trimmed_coat | white_coat | butterfly | lying | standing | white_dress | full_body | fur_trim | high_heels | holding_lantern | off_shoulder | white_footwear | white_headwear | thighhighs | long_sleeves | white_gloves | 
fur_collar | holding | parted_lips | ushanka | white_jacket | open_jacket | papakha | short_dress | simple_background | hetero | nipples | sex | 1boy | open_mouth | penis | pussy | solo_focus | vaginal | mosaic_censoring | nude | spread_legs | sweat | navel | parted_bangs | breast_grab | cowgirl_position | girl_on_top | grabbing | missionary | on_back | onsen | water | completely_nude | wet | night | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------------|:-----------|:--------------------|:--------|:-------|:-----------------|:----------------|:-------------------|:-------------|:----------------|:---------|:--------|:---------|:---------------|:-------------------|:-------------|:------------|:--------|:-----------|:--------------|:------------|:-----------|:-------------|:------------------|:---------------|:-----------------|:-----------------|:-------------|:---------------|:---------------|:-------------|:----------|:--------------|:----------|:---------------|:--------------|:----------|:--------------|:--------------------|:---------|:----------|:------|:-------|:-------------|:--------|:--------|:-------------|:----------|:-------------------|:-------|:--------------|:--------|:--------|:---------------|:--------------|:-------------------|:--------------|:-----------|:-------------|:----------|:--------|:--------|:------------------|:------|:--------| | 0 | 5 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1 | 10 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | 
![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | X | X | X | X | | X | | X | | | X | X | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 2 | 13 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | | X | X | X | X | | | | | X | | X | | | | | | | X | X | | X | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | 3 | 7 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | | | | | | | | | | | | X | | | | | | | | | | | | | | | | X | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | 4 | 6 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | X | | X | X | X | X | | | | X | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | | | | | | | X | X | X | X | X |
CyberHarem/ludmilla_nikke
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2024-01-24T07:31:36+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-24T07:52:13+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of ludmilla/ルドミラ/鲁德米拉/루드밀라 (Nikke: Goddess of Victory) ============================================================== This is the dataset of ludmilla/ルドミラ/鲁德米拉/루드밀라 (Nikke: Goddess of Victory), containing 71 images and their tags. The core tags of this character are 'long\_hair, breasts, blonde\_hair, large\_breasts, red\_eyes, bangs', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the DeepGHS Team (huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide the raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code. List of Clusters ---------------- List of tag clustering results; some outfits may be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
46da7684452d9ed80ae975344718cea74f2cdcee
# Dataset Card for Evaluation run of freecs/ThetaWave-7B-v0.1 <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [freecs/ThetaWave-7B-v0.1](https://huggingface.co/freecs/ThetaWave-7B-v0.1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_freecs__ThetaWave-7B-v0.1", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2024-01-24T07:54:32.474467](https://huggingface.co/datasets/open-llm-leaderboard/details_freecs__ThetaWave-7B-v0.1/blob/main/results_2024-01-24T07-54-32.474467.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks.
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.6369850549630417, "acc_stderr": 0.03254410038117921, "acc_norm": 0.6388746936562116, "acc_norm_stderr": 0.03320176501025405, "mc1": 0.4357405140758874, "mc1_stderr": 0.017358345398863124, "mc2": 0.6024313355556469, "mc2_stderr": 0.01530031296029918 }, "harness|arc:challenge|25": { "acc": 0.621160409556314, "acc_stderr": 0.014175915490000326, "acc_norm": 0.6629692832764505, "acc_norm_stderr": 0.013813476652902274 }, "harness|hellaswag|10": { "acc": 0.6620195180242979, "acc_stderr": 0.004720551323547126, "acc_norm": 0.8540131447918742, "acc_norm_stderr": 0.0035237141526513 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.32, "acc_stderr": 0.046882617226215034, "acc_norm": 0.32, "acc_norm_stderr": 0.046882617226215034 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.6148148148148148, "acc_stderr": 0.04203921040156279, "acc_norm": 0.6148148148148148, "acc_norm_stderr": 0.04203921040156279 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.7039473684210527, "acc_stderr": 0.03715062154998904, "acc_norm": 0.7039473684210527, "acc_norm_stderr": 0.03715062154998904 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.59, "acc_stderr": 0.04943110704237102, "acc_norm": 0.59, "acc_norm_stderr": 0.04943110704237102 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.7132075471698113, "acc_stderr": 0.02783491252754406, "acc_norm": 0.7132075471698113, "acc_norm_stderr": 0.02783491252754406 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.75, "acc_stderr": 0.03621034121889507, "acc_norm": 0.75, "acc_norm_stderr": 0.03621034121889507 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.45, "acc_stderr": 0.05, "acc_norm": 0.45, "acc_norm_stderr": 0.05 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.55, "acc_stderr": 0.05, "acc_norm": 0.55, "acc_norm_stderr": 0.05 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.33, 
"acc_stderr": 0.04725815626252604, "acc_norm": 0.33, "acc_norm_stderr": 0.04725815626252604 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6473988439306358, "acc_stderr": 0.036430371689585475, "acc_norm": 0.6473988439306358, "acc_norm_stderr": 0.036430371689585475 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.46078431372549017, "acc_stderr": 0.04959859966384181, "acc_norm": 0.46078431372549017, "acc_norm_stderr": 0.04959859966384181 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.79, "acc_stderr": 0.040936018074033256, "acc_norm": 0.79, "acc_norm_stderr": 0.040936018074033256 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.5574468085106383, "acc_stderr": 0.032469569197899575, "acc_norm": 0.5574468085106383, "acc_norm_stderr": 0.032469569197899575 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.4649122807017544, "acc_stderr": 0.046920083813689104, "acc_norm": 0.4649122807017544, "acc_norm_stderr": 0.046920083813689104 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.5793103448275863, "acc_stderr": 0.0411391498118926, "acc_norm": 0.5793103448275863, "acc_norm_stderr": 0.0411391498118926 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.4126984126984127, "acc_stderr": 0.025355741263055266, "acc_norm": 0.4126984126984127, "acc_norm_stderr": 0.025355741263055266 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.4444444444444444, "acc_stderr": 0.044444444444444495, "acc_norm": 0.4444444444444444, "acc_norm_stderr": 0.044444444444444495 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.36, "acc_stderr": 0.04824181513244218, "acc_norm": 0.36, "acc_norm_stderr": 0.04824181513244218 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.7290322580645161, "acc_stderr": 0.025284416114900152, "acc_norm": 0.7290322580645161, "acc_norm_stderr": 0.025284416114900152 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.4827586206896552, "acc_stderr": 0.035158955511657, 
"acc_norm": 0.4827586206896552, "acc_norm_stderr": 0.035158955511657 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.63, "acc_stderr": 0.04852365870939098, "acc_norm": 0.63, "acc_norm_stderr": 0.04852365870939098 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.7696969696969697, "acc_stderr": 0.0328766675860349, "acc_norm": 0.7696969696969697, "acc_norm_stderr": 0.0328766675860349 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.7777777777777778, "acc_stderr": 0.02962022787479049, "acc_norm": 0.7777777777777778, "acc_norm_stderr": 0.02962022787479049 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.8704663212435233, "acc_stderr": 0.024233532297758723, "acc_norm": 0.8704663212435233, "acc_norm_stderr": 0.024233532297758723 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.617948717948718, "acc_stderr": 0.024635549163908234, "acc_norm": 0.617948717948718, "acc_norm_stderr": 0.024635549163908234 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.362962962962963, "acc_stderr": 0.02931820364520686, "acc_norm": 0.362962962962963, "acc_norm_stderr": 0.02931820364520686 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.6554621848739496, "acc_stderr": 0.03086868260412162, "acc_norm": 0.6554621848739496, "acc_norm_stderr": 0.03086868260412162 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.37748344370860926, "acc_stderr": 0.03958027231121569, "acc_norm": 0.37748344370860926, "acc_norm_stderr": 0.03958027231121569 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.818348623853211, "acc_stderr": 0.01653061740926685, "acc_norm": 0.818348623853211, "acc_norm_stderr": 0.01653061740926685 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.5, "acc_stderr": 0.034099716973523674, "acc_norm": 0.5, "acc_norm_stderr": 0.034099716973523674 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 
0.7990196078431373, "acc_stderr": 0.028125972265654373, "acc_norm": 0.7990196078431373, "acc_norm_stderr": 0.028125972265654373 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.7805907172995781, "acc_stderr": 0.026939106581553945, "acc_norm": 0.7805907172995781, "acc_norm_stderr": 0.026939106581553945 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.6636771300448431, "acc_stderr": 0.031708824268455, "acc_norm": 0.6636771300448431, "acc_norm_stderr": 0.031708824268455 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.7557251908396947, "acc_stderr": 0.03768335959728744, "acc_norm": 0.7557251908396947, "acc_norm_stderr": 0.03768335959728744 }, "harness|hendrycksTest-international_law|5": { "acc": 0.8181818181818182, "acc_stderr": 0.03520893951097652, "acc_norm": 0.8181818181818182, "acc_norm_stderr": 0.03520893951097652 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.7592592592592593, "acc_stderr": 0.04133119440243839, "acc_norm": 0.7592592592592593, "acc_norm_stderr": 0.04133119440243839 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7361963190184049, "acc_stderr": 0.03462419931615624, "acc_norm": 0.7361963190184049, "acc_norm_stderr": 0.03462419931615624 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.48214285714285715, "acc_stderr": 0.047427623612430116, "acc_norm": 0.48214285714285715, "acc_norm_stderr": 0.047427623612430116 }, "harness|hendrycksTest-management|5": { "acc": 0.7281553398058253, "acc_stderr": 0.044052680241409216, "acc_norm": 0.7281553398058253, "acc_norm_stderr": 0.044052680241409216 }, "harness|hendrycksTest-marketing|5": { "acc": 0.9017094017094017, "acc_stderr": 0.019503444900757567, "acc_norm": 0.9017094017094017, "acc_norm_stderr": 0.019503444900757567 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.74, "acc_stderr": 0.04408440022768078, "acc_norm": 0.74, "acc_norm_stderr": 0.04408440022768078 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.8250319284802043, 
"acc_stderr": 0.01358661921990333, "acc_norm": 0.8250319284802043, "acc_norm_stderr": 0.01358661921990333 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.6791907514450867, "acc_stderr": 0.025131000233647897, "acc_norm": 0.6791907514450867, "acc_norm_stderr": 0.025131000233647897 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.4134078212290503, "acc_stderr": 0.016469814928406164, "acc_norm": 0.4134078212290503, "acc_norm_stderr": 0.016469814928406164 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.7124183006535948, "acc_stderr": 0.02591780611714716, "acc_norm": 0.7124183006535948, "acc_norm_stderr": 0.02591780611714716 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.6913183279742765, "acc_stderr": 0.02623696588115327, "acc_norm": 0.6913183279742765, "acc_norm_stderr": 0.02623696588115327 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.7098765432098766, "acc_stderr": 0.025251173936495033, "acc_norm": 0.7098765432098766, "acc_norm_stderr": 0.025251173936495033 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.4716312056737589, "acc_stderr": 0.02977945095730307, "acc_norm": 0.4716312056737589, "acc_norm_stderr": 0.02977945095730307 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.44654498044328556, "acc_stderr": 0.012697046024399675, "acc_norm": 0.44654498044328556, "acc_norm_stderr": 0.012697046024399675 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.6433823529411765, "acc_stderr": 0.029097209568411952, "acc_norm": 0.6433823529411765, "acc_norm_stderr": 0.029097209568411952 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6454248366013072, "acc_stderr": 0.019353360547553697, "acc_norm": 0.6454248366013072, "acc_norm_stderr": 0.019353360547553697 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.7272727272727273, "acc_stderr": 0.04265792110940588, "acc_norm": 0.7272727272727273, "acc_norm_stderr": 0.04265792110940588 }, "harness|hendrycksTest-security_studies|5": { "acc": 
0.7183673469387755, "acc_stderr": 0.028795185574291296, "acc_norm": 0.7183673469387755, "acc_norm_stderr": 0.028795185574291296 }, "harness|hendrycksTest-sociology|5": { "acc": 0.8009950248756219, "acc_stderr": 0.028231365092758406, "acc_norm": 0.8009950248756219, "acc_norm_stderr": 0.028231365092758406 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.83, "acc_stderr": 0.03775251680686371, "acc_norm": 0.83, "acc_norm_stderr": 0.03775251680686371 }, "harness|hendrycksTest-virology|5": { "acc": 0.5240963855421686, "acc_stderr": 0.03887971849597264, "acc_norm": 0.5240963855421686, "acc_norm_stderr": 0.03887971849597264 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.8421052631578947, "acc_stderr": 0.027966785859160882, "acc_norm": 0.8421052631578947, "acc_norm_stderr": 0.027966785859160882 }, "harness|truthfulqa:mc|0": { "mc1": 0.4357405140758874, "mc1_stderr": 0.017358345398863124, "mc2": 0.6024313355556469, "mc2_stderr": 0.01530031296029918 }, "harness|winogrande|5": { "acc": 0.8018942383583267, "acc_stderr": 0.011201862744487052 }, "harness|gsm8k|5": { "acc": 0.5943896891584534, "acc_stderr": 0.013524848894462115 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. 
--> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. 
--> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
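The "all" block in the results above aggregates the per-task scores. As a rough illustration of how such an aggregate relates to the individual task entries, the sketch below macro-averages a few of the per-task `acc` values taken from the results JSON; note this is illustrative only, and the leaderboard's actual "all" figure is computed over the full task set and may use a different weighting.

```python
# Macro-average per-task "acc" values from a results payload shaped like
# the one shown in this card. The three tasks here are a small subset,
# chosen only to illustrate the aggregation; the leaderboard's "all"
# entry averages over every evaluated task.
results = {
    "harness|arc:challenge|25": {"acc": 0.621160409556314},
    "harness|hellaswag|10": {"acc": 0.6620195180242979},
    "harness|winogrande|5": {"acc": 0.8018942383583267},
}
macro_acc = sum(task["acc"] for task in results.values()) / len(results)
print(round(macro_acc, 4))
```

The same pattern applies to `acc_norm` or any other per-task metric: collect the values across the `harness|...` keys and average them.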
open-llm-leaderboard/details_freecs__ThetaWave-7B-v0.1
[ "region:us" ]
2024-01-24T07:56:52+00:00
{"pretty_name": "Evaluation run of freecs/ThetaWave-7B-v0.1", "dataset_summary": "Dataset automatically created during the evaluation run of model [freecs/ThetaWave-7B-v0.1](https://huggingface.co/freecs/ThetaWave-7B-v0.1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_freecs__ThetaWave-7B-v0.1\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-24T07:54:32.474467](https://huggingface.co/datasets/open-llm-leaderboard/details_freecs__ThetaWave-7B-v0.1/blob/main/results_2024-01-24T07-54-32.474467.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6369850549630417,\n \"acc_stderr\": 0.03254410038117921,\n \"acc_norm\": 0.6388746936562116,\n \"acc_norm_stderr\": 0.03320176501025405,\n \"mc1\": 0.4357405140758874,\n \"mc1_stderr\": 0.017358345398863124,\n \"mc2\": 0.6024313355556469,\n \"mc2_stderr\": 0.01530031296029918\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.621160409556314,\n \"acc_stderr\": 0.014175915490000326,\n \"acc_norm\": 0.6629692832764505,\n \"acc_norm_stderr\": 0.013813476652902274\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6620195180242979,\n \"acc_stderr\": 0.004720551323547126,\n \"acc_norm\": 0.8540131447918742,\n \"acc_norm_stderr\": 0.0035237141526513\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6148148148148148,\n \"acc_stderr\": 0.04203921040156279,\n \"acc_norm\": 0.6148148148148148,\n \"acc_norm_stderr\": 0.04203921040156279\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.7039473684210527,\n \"acc_stderr\": 0.03715062154998904,\n \"acc_norm\": 0.7039473684210527,\n \"acc_norm_stderr\": 0.03715062154998904\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.59,\n \"acc_stderr\": 0.04943110704237102,\n \"acc_norm\": 0.59,\n \"acc_norm_stderr\": 0.04943110704237102\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.7132075471698113,\n \"acc_stderr\": 0.02783491252754406,\n \"acc_norm\": 0.7132075471698113,\n \"acc_norm_stderr\": 0.02783491252754406\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.75,\n \"acc_stderr\": 0.03621034121889507,\n \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.03621034121889507\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.45,\n \"acc_stderr\": 0.05,\n 
\"acc_norm\": 0.45,\n \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.55,\n \"acc_stderr\": 0.05,\n \"acc_norm\": 0.55,\n \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252604,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252604\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6473988439306358,\n \"acc_stderr\": 0.036430371689585475,\n \"acc_norm\": 0.6473988439306358,\n \"acc_norm_stderr\": 0.036430371689585475\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.46078431372549017,\n \"acc_stderr\": 0.04959859966384181,\n \"acc_norm\": 0.46078431372549017,\n \"acc_norm_stderr\": 0.04959859966384181\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.79,\n \"acc_stderr\": 0.040936018074033256,\n \"acc_norm\": 0.79,\n \"acc_norm_stderr\": 0.040936018074033256\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5574468085106383,\n \"acc_stderr\": 0.032469569197899575,\n \"acc_norm\": 0.5574468085106383,\n \"acc_norm_stderr\": 0.032469569197899575\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4649122807017544,\n \"acc_stderr\": 0.046920083813689104,\n \"acc_norm\": 0.4649122807017544,\n \"acc_norm_stderr\": 0.046920083813689104\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5793103448275863,\n \"acc_stderr\": 0.0411391498118926,\n \"acc_norm\": 0.5793103448275863,\n \"acc_norm_stderr\": 0.0411391498118926\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.4126984126984127,\n \"acc_stderr\": 0.025355741263055266,\n \"acc_norm\": 0.4126984126984127,\n \"acc_norm_stderr\": 0.025355741263055266\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4444444444444444,\n \"acc_stderr\": 0.044444444444444495,\n \"acc_norm\": 0.4444444444444444,\n \"acc_norm_stderr\": 
0.044444444444444495\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.36,\n \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7290322580645161,\n \"acc_stderr\": 0.025284416114900152,\n \"acc_norm\": 0.7290322580645161,\n \"acc_norm_stderr\": 0.025284416114900152\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.4827586206896552,\n \"acc_stderr\": 0.035158955511657,\n \"acc_norm\": 0.4827586206896552,\n \"acc_norm_stderr\": 0.035158955511657\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.63,\n \"acc_stderr\": 0.04852365870939098,\n \"acc_norm\": 0.63,\n \"acc_norm_stderr\": 0.04852365870939098\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7696969696969697,\n \"acc_stderr\": 0.0328766675860349,\n \"acc_norm\": 0.7696969696969697,\n \"acc_norm_stderr\": 0.0328766675860349\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7777777777777778,\n \"acc_stderr\": 0.02962022787479049,\n \"acc_norm\": 0.7777777777777778,\n \"acc_norm_stderr\": 0.02962022787479049\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8704663212435233,\n \"acc_stderr\": 0.024233532297758723,\n \"acc_norm\": 0.8704663212435233,\n \"acc_norm_stderr\": 0.024233532297758723\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.617948717948718,\n \"acc_stderr\": 0.024635549163908234,\n \"acc_norm\": 0.617948717948718,\n \"acc_norm_stderr\": 0.024635549163908234\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.362962962962963,\n \"acc_stderr\": 0.02931820364520686,\n \"acc_norm\": 0.362962962962963,\n \"acc_norm_stderr\": 0.02931820364520686\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6554621848739496,\n \"acc_stderr\": 
0.03086868260412162,\n \"acc_norm\": 0.6554621848739496,\n \"acc_norm_stderr\": 0.03086868260412162\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.37748344370860926,\n \"acc_stderr\": 0.03958027231121569,\n \"acc_norm\": 0.37748344370860926,\n \"acc_norm_stderr\": 0.03958027231121569\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.818348623853211,\n \"acc_stderr\": 0.01653061740926685,\n \"acc_norm\": 0.818348623853211,\n \"acc_norm_stderr\": 0.01653061740926685\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.034099716973523674,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.034099716973523674\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7990196078431373,\n \"acc_stderr\": 0.028125972265654373,\n \"acc_norm\": 0.7990196078431373,\n \"acc_norm_stderr\": 0.028125972265654373\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7805907172995781,\n \"acc_stderr\": 0.026939106581553945,\n \"acc_norm\": 0.7805907172995781,\n \"acc_norm_stderr\": 0.026939106581553945\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6636771300448431,\n \"acc_stderr\": 0.031708824268455,\n \"acc_norm\": 0.6636771300448431,\n \"acc_norm_stderr\": 0.031708824268455\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7557251908396947,\n \"acc_stderr\": 0.03768335959728744,\n \"acc_norm\": 0.7557251908396947,\n \"acc_norm_stderr\": 0.03768335959728744\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8181818181818182,\n \"acc_stderr\": 0.03520893951097652,\n \"acc_norm\": 0.8181818181818182,\n \"acc_norm_stderr\": 0.03520893951097652\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7592592592592593,\n \"acc_stderr\": 0.04133119440243839,\n \"acc_norm\": 0.7592592592592593,\n \"acc_norm_stderr\": 0.04133119440243839\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 
0.7361963190184049,\n \"acc_stderr\": 0.03462419931615624,\n \"acc_norm\": 0.7361963190184049,\n \"acc_norm_stderr\": 0.03462419931615624\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.48214285714285715,\n \"acc_stderr\": 0.047427623612430116,\n \"acc_norm\": 0.48214285714285715,\n \"acc_norm_stderr\": 0.047427623612430116\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7281553398058253,\n \"acc_stderr\": 0.044052680241409216,\n \"acc_norm\": 0.7281553398058253,\n \"acc_norm_stderr\": 0.044052680241409216\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.9017094017094017,\n \"acc_stderr\": 0.019503444900757567,\n \"acc_norm\": 0.9017094017094017,\n \"acc_norm_stderr\": 0.019503444900757567\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.74,\n \"acc_stderr\": 0.04408440022768078,\n \"acc_norm\": 0.74,\n \"acc_norm_stderr\": 0.04408440022768078\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8250319284802043,\n \"acc_stderr\": 0.01358661921990333,\n \"acc_norm\": 0.8250319284802043,\n \"acc_norm_stderr\": 0.01358661921990333\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.6791907514450867,\n \"acc_stderr\": 0.025131000233647897,\n \"acc_norm\": 0.6791907514450867,\n \"acc_norm_stderr\": 0.025131000233647897\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.4134078212290503,\n \"acc_stderr\": 0.016469814928406164,\n \"acc_norm\": 0.4134078212290503,\n \"acc_norm_stderr\": 0.016469814928406164\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7124183006535948,\n \"acc_stderr\": 0.02591780611714716,\n \"acc_norm\": 0.7124183006535948,\n \"acc_norm_stderr\": 0.02591780611714716\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6913183279742765,\n \"acc_stderr\": 0.02623696588115327,\n \"acc_norm\": 0.6913183279742765,\n \"acc_norm_stderr\": 0.02623696588115327\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 
0.7098765432098766,\n \"acc_stderr\": 0.025251173936495033,\n \"acc_norm\": 0.7098765432098766,\n \"acc_norm_stderr\": 0.025251173936495033\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4716312056737589,\n \"acc_stderr\": 0.02977945095730307,\n \"acc_norm\": 0.4716312056737589,\n \"acc_norm_stderr\": 0.02977945095730307\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.44654498044328556,\n \"acc_stderr\": 0.012697046024399675,\n \"acc_norm\": 0.44654498044328556,\n \"acc_norm_stderr\": 0.012697046024399675\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6433823529411765,\n \"acc_stderr\": 0.029097209568411952,\n \"acc_norm\": 0.6433823529411765,\n \"acc_norm_stderr\": 0.029097209568411952\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6454248366013072,\n \"acc_stderr\": 0.019353360547553697,\n \"acc_norm\": 0.6454248366013072,\n \"acc_norm_stderr\": 0.019353360547553697\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7272727272727273,\n \"acc_stderr\": 0.04265792110940588,\n \"acc_norm\": 0.7272727272727273,\n \"acc_norm_stderr\": 0.04265792110940588\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7183673469387755,\n \"acc_stderr\": 0.028795185574291296,\n \"acc_norm\": 0.7183673469387755,\n \"acc_norm_stderr\": 0.028795185574291296\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8009950248756219,\n \"acc_stderr\": 0.028231365092758406,\n \"acc_norm\": 0.8009950248756219,\n \"acc_norm_stderr\": 0.028231365092758406\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.83,\n \"acc_stderr\": 0.03775251680686371,\n \"acc_norm\": 0.83,\n \"acc_norm_stderr\": 0.03775251680686371\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5240963855421686,\n \"acc_stderr\": 0.03887971849597264,\n \"acc_norm\": 0.5240963855421686,\n \"acc_norm_stderr\": 0.03887971849597264\n },\n 
\"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8421052631578947,\n \"acc_stderr\": 0.027966785859160882,\n \"acc_norm\": 0.8421052631578947,\n \"acc_norm_stderr\": 0.027966785859160882\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4357405140758874,\n \"mc1_stderr\": 0.017358345398863124,\n \"mc2\": 0.6024313355556469,\n \"mc2_stderr\": 0.01530031296029918\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8018942383583267,\n \"acc_stderr\": 0.011201862744487052\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.5943896891584534,\n \"acc_stderr\": 0.013524848894462115\n }\n}\n```", "repo_url": "https://huggingface.co/freecs/ThetaWave-7B-v0.1", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|arc:challenge|25_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|gsm8k|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|hellaswag|10_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-24T07-54-32.474467.parquet", 
"**/details_harness|hendrycksTest-astronomy|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T07-54-32.474467.parquet", 
"**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T07-54-32.474467.parquet", 
"**/details_harness|hendrycksTest-nutrition|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T07-54-32.474467.parquet", 
"**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T07-54-32.474467.parquet", 
"**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-24T07-54-32.474467.parquet", 
"**/details_harness|hendrycksTest-security_studies|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-24T07-54-32.474467.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": 
["**/details_harness|hendrycksTest-college_physics|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", 
"data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-24T07-54-32.474467.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", 
"path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["**/details_harness|winogrande|5_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-24T07-54-32.474467.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_01_24T07_54_32.474467", "path": ["results_2024-01-24T07-54-32.474467.parquet"]}, {"split": "latest", "path": ["results_2024-01-24T07-54-32.474467.parquet"]}]}]}
2024-01-24T07:57:13+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of freecs/ThetaWave-7B-v0.1 Dataset automatically created during the evaluation run of model freecs/ThetaWave-7B-v0.1 on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2024-01-24T07:54:32.474467 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
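The loading snippet the card refers to ("you can for instance do the following") was stripped in this dump. A minimal sketch, assuming the leaderboard's standard details-repo naming (`open-llm-leaderboard/details_<org>__<model>`, with `/` replaced by `__`) and the `datasets` library:

```python
# Sketch of the "load the details from a run" step referenced in the card.
# Assumption: the leaderboard publishes per-model detail repos named
# "open-llm-leaderboard/details_<org>__<model>" (slash -> double underscore).
model_id = "freecs/ThetaWave-7B-v0.1"
details_repo = "open-llm-leaderboard/details_" + model_id.replace("/", "__")
print(details_repo)  # open-llm-leaderboard/details_freecs__ThetaWave-7B-v0.1

# Fetching a config (e.g. the 5-shot Winogrande details listed above) needs
# the `datasets` library and network access:
# from datasets import load_dataset
# data = load_dataset(details_repo, "harness_winogrande_5", split="latest")
```

The config names accepted by the second argument are the `config_name` values listed in the metadata above (e.g. `harness_winogrande_5`, `harness_gsm8k_5`, `results`).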
[ "# Dataset Card for Evaluation run of freecs/ThetaWave-7B-v0.1\n\n\n\nDataset automatically created during the evaluation run of model freecs/ThetaWave-7B-v0.1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-01-24T07:54:32.474467(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of freecs/ThetaWave-7B-v0.1\n\n\n\nDataset automatically created during the evaluation run of model freecs/ThetaWave-7B-v0.1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-01-24T07:54:32.474467(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
12ace059393948d5380a600b2bb1dd963e60f2e2
# Dataset Card for Evaluation run of abacusai/MM-OV-bagel-DPO-34b-c1000-250

<!-- Provide a quick summary of the dataset. -->

Dataset automatically created during the evaluation run of model [abacusai/MM-OV-bagel-DPO-34b-c1000-250](https://huggingface.co/abacusai/MM-OV-bagel-DPO-34b-c1000-250) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).

To load the details from a run, you can for instance do the following:

```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_abacusai__MM-OV-bagel-DPO-34b-c1000-250",
	"harness_winogrande_5",
	split="train")
```

## Latest results

These are the [latest results from run 2024-01-24T07:59:43.945933](https://huggingface.co/datasets/open-llm-leaderboard/details_abacusai__MM-OV-bagel-DPO-34b-c1000-250/blob/main/results_2024-01-24T07-59-43.945933.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks.
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.7597155366563035, "acc_stderr": 0.02837032363320797, "acc_norm": 0.7632345413090461, "acc_norm_stderr": 0.02891633054739416, "mc1": 0.4810281517747858, "mc1_stderr": 0.01749089640576235, "mc2": 0.6367417890283518, "mc2_stderr": 0.01475171297078638 }, "harness|arc:challenge|25": { "acc": 0.6459044368600683, "acc_stderr": 0.01397545412275656, "acc_norm": 0.681740614334471, "acc_norm_stderr": 0.013611993916971451 }, "harness|hellaswag|10": { "acc": 0.6408086038637721, "acc_stderr": 0.004787829168255652, "acc_norm": 0.8396733718382793, "acc_norm_stderr": 0.0036615885079775523 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.47, "acc_stderr": 0.050161355804659205, "acc_norm": 0.47, "acc_norm_stderr": 0.050161355804659205 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.7481481481481481, "acc_stderr": 0.03749850709174021, "acc_norm": 0.7481481481481481, "acc_norm_stderr": 0.03749850709174021 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.8881578947368421, "acc_stderr": 0.02564834125169361, "acc_norm": 0.8881578947368421, "acc_norm_stderr": 0.02564834125169361 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.77, "acc_stderr": 0.04229525846816505, "acc_norm": 0.77, "acc_norm_stderr": 0.04229525846816505 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.8075471698113208, "acc_stderr": 0.024262979839372274, "acc_norm": 0.8075471698113208, "acc_norm_stderr": 0.024262979839372274 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.9097222222222222, "acc_stderr": 0.023964965777906935, "acc_norm": 0.9097222222222222, "acc_norm_stderr": 0.023964965777906935 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.51, "acc_stderr": 0.05024183937956912, "acc_norm": 0.51, "acc_norm_stderr": 0.05024183937956912 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.6, "acc_stderr": 0.04923659639173309, "acc_norm": 0.6, "acc_norm_stderr": 
0.04923659639173309 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.39, "acc_stderr": 0.04902071300001974, "acc_norm": 0.39, "acc_norm_stderr": 0.04902071300001974 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.7398843930635838, "acc_stderr": 0.03345036916788991, "acc_norm": 0.7398843930635838, "acc_norm_stderr": 0.03345036916788991 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.5392156862745098, "acc_stderr": 0.04959859966384181, "acc_norm": 0.5392156862745098, "acc_norm_stderr": 0.04959859966384181 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.8, "acc_stderr": 0.04020151261036845, "acc_norm": 0.8, "acc_norm_stderr": 0.04020151261036845 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.7829787234042553, "acc_stderr": 0.02694748312149625, "acc_norm": 0.7829787234042553, "acc_norm_stderr": 0.02694748312149625 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.5964912280701754, "acc_stderr": 0.04615186962583707, "acc_norm": 0.5964912280701754, "acc_norm_stderr": 0.04615186962583707 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.7517241379310344, "acc_stderr": 0.036001056927277696, "acc_norm": 0.7517241379310344, "acc_norm_stderr": 0.036001056927277696 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.7407407407407407, "acc_stderr": 0.02256989707491842, "acc_norm": 0.7407407407407407, "acc_norm_stderr": 0.02256989707491842 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.5317460317460317, "acc_stderr": 0.04463112720677173, "acc_norm": 0.5317460317460317, "acc_norm_stderr": 0.04463112720677173 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.58, "acc_stderr": 0.049604496374885836, "acc_norm": 0.58, "acc_norm_stderr": 0.049604496374885836 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.9, "acc_stderr": 0.01706640371965727, "acc_norm": 0.9, "acc_norm_stderr": 0.01706640371965727 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.6798029556650246, 
"acc_stderr": 0.03282649385304151, "acc_norm": 0.6798029556650246, "acc_norm_stderr": 0.03282649385304151 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.77, "acc_stderr": 0.042295258468165044, "acc_norm": 0.77, "acc_norm_stderr": 0.042295258468165044 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.8484848484848485, "acc_stderr": 0.027998073798781664, "acc_norm": 0.8484848484848485, "acc_norm_stderr": 0.027998073798781664 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.9444444444444444, "acc_stderr": 0.016319950700767374, "acc_norm": 0.9444444444444444, "acc_norm_stderr": 0.016319950700767374 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.9637305699481865, "acc_stderr": 0.013492659751295131, "acc_norm": 0.9637305699481865, "acc_norm_stderr": 0.013492659751295131 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.8102564102564103, "acc_stderr": 0.0198801654065888, "acc_norm": 0.8102564102564103, "acc_norm_stderr": 0.0198801654065888 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.45555555555555555, "acc_stderr": 0.03036486250482443, "acc_norm": 0.45555555555555555, "acc_norm_stderr": 0.03036486250482443 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.865546218487395, "acc_stderr": 0.02215937307274444, "acc_norm": 0.865546218487395, "acc_norm_stderr": 0.02215937307274444 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.5099337748344371, "acc_stderr": 0.04081677107248437, "acc_norm": 0.5099337748344371, "acc_norm_stderr": 0.04081677107248437 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.9137614678899083, "acc_stderr": 0.012035597300116245, "acc_norm": 0.9137614678899083, "acc_norm_stderr": 0.012035597300116245 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.6481481481481481, "acc_stderr": 0.03256850570293647, "acc_norm": 0.6481481481481481, "acc_norm_stderr": 0.03256850570293647 }, 
"harness|hendrycksTest-high_school_us_history|5": { "acc": 0.9068627450980392, "acc_stderr": 0.020397853969427, "acc_norm": 0.9068627450980392, "acc_norm_stderr": 0.020397853969427 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.9071729957805907, "acc_stderr": 0.01888975055095671, "acc_norm": 0.9071729957805907, "acc_norm_stderr": 0.01888975055095671 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.8116591928251121, "acc_stderr": 0.026241132996407252, "acc_norm": 0.8116591928251121, "acc_norm_stderr": 0.026241132996407252 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.8549618320610687, "acc_stderr": 0.030884661089515375, "acc_norm": 0.8549618320610687, "acc_norm_stderr": 0.030884661089515375 }, "harness|hendrycksTest-international_law|5": { "acc": 0.8842975206611571, "acc_stderr": 0.029199802455622804, "acc_norm": 0.8842975206611571, "acc_norm_stderr": 0.029199802455622804 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.8888888888888888, "acc_stderr": 0.030381596756651655, "acc_norm": 0.8888888888888888, "acc_norm_stderr": 0.030381596756651655 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.852760736196319, "acc_stderr": 0.027839915278339653, "acc_norm": 0.852760736196319, "acc_norm_stderr": 0.027839915278339653 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.5178571428571429, "acc_stderr": 0.047427623612430116, "acc_norm": 0.5178571428571429, "acc_norm_stderr": 0.047427623612430116 }, "harness|hendrycksTest-management|5": { "acc": 0.8640776699029126, "acc_stderr": 0.0339329572976101, "acc_norm": 0.8640776699029126, "acc_norm_stderr": 0.0339329572976101 }, "harness|hendrycksTest-marketing|5": { "acc": 0.9401709401709402, "acc_stderr": 0.015537514263253858, "acc_norm": 0.9401709401709402, "acc_norm_stderr": 0.015537514263253858 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.89, "acc_stderr": 0.03144660377352202, "acc_norm": 0.89, "acc_norm_stderr": 0.03144660377352202 }, 
"harness|hendrycksTest-miscellaneous|5": { "acc": 0.9106002554278416, "acc_stderr": 0.010203017847688312, "acc_norm": 0.9106002554278416, "acc_norm_stderr": 0.010203017847688312 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.815028901734104, "acc_stderr": 0.02090397584208303, "acc_norm": 0.815028901734104, "acc_norm_stderr": 0.02090397584208303 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.8100558659217877, "acc_stderr": 0.013119028310492683, "acc_norm": 0.8100558659217877, "acc_norm_stderr": 0.013119028310492683 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.8431372549019608, "acc_stderr": 0.020823758837580912, "acc_norm": 0.8431372549019608, "acc_norm_stderr": 0.020823758837580912 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.797427652733119, "acc_stderr": 0.022827317491059686, "acc_norm": 0.797427652733119, "acc_norm_stderr": 0.022827317491059686 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.8641975308641975, "acc_stderr": 0.019061588181505388, "acc_norm": 0.8641975308641975, "acc_norm_stderr": 0.019061588181505388 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.6347517730496454, "acc_stderr": 0.02872386385328127, "acc_norm": 0.6347517730496454, "acc_norm_stderr": 0.02872386385328127 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.590612777053455, "acc_stderr": 0.012558780895570755, "acc_norm": 0.590612777053455, "acc_norm_stderr": 0.012558780895570755 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.8235294117647058, "acc_stderr": 0.023157468308559342, "acc_norm": 0.8235294117647058, "acc_norm_stderr": 0.023157468308559342 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.815359477124183, "acc_stderr": 0.01569702924075778, "acc_norm": 0.815359477124183, "acc_norm_stderr": 0.01569702924075778 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.7090909090909091, "acc_stderr": 0.04350271442923243, "acc_norm": 0.7090909090909091, "acc_norm_stderr": 0.04350271442923243 
}, "harness|hendrycksTest-security_studies|5": { "acc": 0.8448979591836735, "acc_stderr": 0.0231747988612186, "acc_norm": 0.8448979591836735, "acc_norm_stderr": 0.0231747988612186 }, "harness|hendrycksTest-sociology|5": { "acc": 0.900497512437811, "acc_stderr": 0.021166216304659407, "acc_norm": 0.900497512437811, "acc_norm_stderr": 0.021166216304659407 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.91, "acc_stderr": 0.02876234912646613, "acc_norm": 0.91, "acc_norm_stderr": 0.02876234912646613 }, "harness|hendrycksTest-virology|5": { "acc": 0.5783132530120482, "acc_stderr": 0.03844453181770917, "acc_norm": 0.5783132530120482, "acc_norm_stderr": 0.03844453181770917 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.8771929824561403, "acc_stderr": 0.02517298435015577, "acc_norm": 0.8771929824561403, "acc_norm_stderr": 0.02517298435015577 }, "harness|truthfulqa:mc|0": { "mc1": 0.4810281517747858, "mc1_stderr": 0.01749089640576235, "mc2": 0.6367417890283518, "mc2_stderr": 0.01475171297078638 }, "harness|winogrande|5": { "acc": 0.823993685872139, "acc_stderr": 0.010703090882320705 }, "harness|gsm8k|5": { "acc": 0.7225170583775588, "acc_stderr": 0.012333447581047539 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. 
--> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. 
--> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
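The per-task metrics embedded in the results block above can be aggregated offline once loaded. A minimal sketch of one way to macro-average the `hendrycksTest` (MMLU) sub-task accuracies, operating on a small hand-copied excerpt of that dict — whether the leaderboard itself weights tasks this way is not stated in this card:

```python
def macro_avg_acc(results: dict, prefix: str = "harness|hendrycksTest-") -> float:
    # Unweighted mean of "acc" over every task whose key starts with `prefix`.
    accs = [v["acc"] for k, v in results.items() if k.startswith(prefix)]
    if not accs:
        raise ValueError(f"no tasks matching {prefix!r}")
    return sum(accs) / len(accs)


# Small excerpt copied from the results block above.
sample = {
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.47},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.7481481481481481},
    "harness|gsm8k|5": {"acc": 0.7225170583775588},  # excluded by the prefix
}
print(round(macro_avg_acc(sample), 4))  # 0.6091
```

The `|5` suffix in the task keys is the few-shot count, so the prefix match deliberately stops before it.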
open-llm-leaderboard/details_abacusai__MM-OV-bagel-DPO-34b-c1000-250
[ "region:us" ]
2024-01-24T07:58:17+00:00
{"pretty_name": "Evaluation run of abacusai/MM-OV-bagel-DPO-34b-c1000-250", "dataset_summary": "Dataset automatically created during the evaluation run of model [abacusai/MM-OV-bagel-DPO-34b-c1000-250](https://huggingface.co/abacusai/MM-OV-bagel-DPO-34b-c1000-250) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_abacusai__MM-OV-bagel-DPO-34b-c1000-250\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-24T07:59:43.945933](https://huggingface.co/datasets/open-llm-leaderboard/details_abacusai__MM-OV-bagel-DPO-34b-c1000-250/blob/main/results_2024-01-24T07-59-43.945933.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.7597155366563035,\n \"acc_stderr\": 0.02837032363320797,\n \"acc_norm\": 0.7632345413090461,\n \"acc_norm_stderr\": 0.02891633054739416,\n \"mc1\": 0.4810281517747858,\n \"mc1_stderr\": 0.01749089640576235,\n \"mc2\": 0.6367417890283518,\n \"mc2_stderr\": 0.01475171297078638\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6459044368600683,\n \"acc_stderr\": 0.01397545412275656,\n \"acc_norm\": 0.681740614334471,\n \"acc_norm_stderr\": 0.013611993916971451\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6408086038637721,\n \"acc_stderr\": 0.004787829168255652,\n \"acc_norm\": 0.8396733718382793,\n \"acc_norm_stderr\": 0.0036615885079775523\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.47,\n \"acc_stderr\": 0.050161355804659205,\n \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.050161355804659205\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.7481481481481481,\n \"acc_stderr\": 0.03749850709174021,\n \"acc_norm\": 0.7481481481481481,\n \"acc_norm_stderr\": 0.03749850709174021\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.8881578947368421,\n \"acc_stderr\": 0.02564834125169361,\n \"acc_norm\": 0.8881578947368421,\n \"acc_norm_stderr\": 0.02564834125169361\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.77,\n \"acc_stderr\": 0.04229525846816505,\n \"acc_norm\": 0.77,\n \"acc_norm_stderr\": 0.04229525846816505\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.8075471698113208,\n \"acc_stderr\": 0.024262979839372274,\n \"acc_norm\": 0.8075471698113208,\n \"acc_norm_stderr\": 0.024262979839372274\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.9097222222222222,\n \"acc_stderr\": 0.023964965777906935,\n \"acc_norm\": 0.9097222222222222,\n \"acc_norm_stderr\": 0.023964965777906935\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.51,\n 
\"acc_stderr\": 0.05024183937956912,\n \"acc_norm\": 0.51,\n \"acc_norm_stderr\": 0.05024183937956912\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.6,\n \"acc_stderr\": 0.04923659639173309,\n \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.04923659639173309\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.39,\n \"acc_stderr\": 0.04902071300001974,\n \"acc_norm\": 0.39,\n \"acc_norm_stderr\": 0.04902071300001974\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.7398843930635838,\n \"acc_stderr\": 0.03345036916788991,\n \"acc_norm\": 0.7398843930635838,\n \"acc_norm_stderr\": 0.03345036916788991\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.5392156862745098,\n \"acc_stderr\": 0.04959859966384181,\n \"acc_norm\": 0.5392156862745098,\n \"acc_norm_stderr\": 0.04959859966384181\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.8,\n \"acc_stderr\": 0.04020151261036845,\n \"acc_norm\": 0.8,\n \"acc_norm_stderr\": 0.04020151261036845\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.7829787234042553,\n \"acc_stderr\": 0.02694748312149625,\n \"acc_norm\": 0.7829787234042553,\n \"acc_norm_stderr\": 0.02694748312149625\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5964912280701754,\n \"acc_stderr\": 0.04615186962583707,\n \"acc_norm\": 0.5964912280701754,\n \"acc_norm_stderr\": 0.04615186962583707\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.7517241379310344,\n \"acc_stderr\": 0.036001056927277696,\n \"acc_norm\": 0.7517241379310344,\n \"acc_norm_stderr\": 0.036001056927277696\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.7407407407407407,\n \"acc_stderr\": 0.02256989707491842,\n \"acc_norm\": 0.7407407407407407,\n \"acc_norm_stderr\": 0.02256989707491842\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.5317460317460317,\n \"acc_stderr\": 0.04463112720677173,\n 
\"acc_norm\": 0.5317460317460317,\n \"acc_norm_stderr\": 0.04463112720677173\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.58,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.58,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.9,\n \"acc_stderr\": 0.01706640371965727,\n \"acc_norm\": 0.9,\n \"acc_norm_stderr\": 0.01706640371965727\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.6798029556650246,\n \"acc_stderr\": 0.03282649385304151,\n \"acc_norm\": 0.6798029556650246,\n \"acc_norm_stderr\": 0.03282649385304151\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.77,\n \"acc_stderr\": 0.042295258468165044,\n \"acc_norm\": 0.77,\n \"acc_norm_stderr\": 0.042295258468165044\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.8484848484848485,\n \"acc_stderr\": 0.027998073798781664,\n \"acc_norm\": 0.8484848484848485,\n \"acc_norm_stderr\": 0.027998073798781664\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.9444444444444444,\n \"acc_stderr\": 0.016319950700767374,\n \"acc_norm\": 0.9444444444444444,\n \"acc_norm_stderr\": 0.016319950700767374\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.9637305699481865,\n \"acc_stderr\": 0.013492659751295131,\n \"acc_norm\": 0.9637305699481865,\n \"acc_norm_stderr\": 0.013492659751295131\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.8102564102564103,\n \"acc_stderr\": 0.0198801654065888,\n \"acc_norm\": 0.8102564102564103,\n \"acc_norm_stderr\": 0.0198801654065888\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.45555555555555555,\n \"acc_stderr\": 0.03036486250482443,\n \"acc_norm\": 0.45555555555555555,\n \"acc_norm_stderr\": 0.03036486250482443\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 
0.865546218487395,\n \"acc_stderr\": 0.02215937307274444,\n \"acc_norm\": 0.865546218487395,\n \"acc_norm_stderr\": 0.02215937307274444\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.5099337748344371,\n \"acc_stderr\": 0.04081677107248437,\n \"acc_norm\": 0.5099337748344371,\n \"acc_norm_stderr\": 0.04081677107248437\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.9137614678899083,\n \"acc_stderr\": 0.012035597300116245,\n \"acc_norm\": 0.9137614678899083,\n \"acc_norm_stderr\": 0.012035597300116245\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.6481481481481481,\n \"acc_stderr\": 0.03256850570293647,\n \"acc_norm\": 0.6481481481481481,\n \"acc_norm_stderr\": 0.03256850570293647\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.9068627450980392,\n \"acc_stderr\": 0.020397853969427,\n \"acc_norm\": 0.9068627450980392,\n \"acc_norm_stderr\": 0.020397853969427\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.9071729957805907,\n \"acc_stderr\": 0.01888975055095671,\n \"acc_norm\": 0.9071729957805907,\n \"acc_norm_stderr\": 0.01888975055095671\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.8116591928251121,\n \"acc_stderr\": 0.026241132996407252,\n \"acc_norm\": 0.8116591928251121,\n \"acc_norm_stderr\": 0.026241132996407252\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.8549618320610687,\n \"acc_stderr\": 0.030884661089515375,\n \"acc_norm\": 0.8549618320610687,\n \"acc_norm_stderr\": 0.030884661089515375\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8842975206611571,\n \"acc_stderr\": 0.029199802455622804,\n \"acc_norm\": 0.8842975206611571,\n \"acc_norm_stderr\": 0.029199802455622804\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8888888888888888,\n \"acc_stderr\": 0.030381596756651655,\n \"acc_norm\": 0.8888888888888888,\n \"acc_norm_stderr\": 
0.030381596756651655\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.852760736196319,\n \"acc_stderr\": 0.027839915278339653,\n \"acc_norm\": 0.852760736196319,\n \"acc_norm_stderr\": 0.027839915278339653\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5178571428571429,\n \"acc_stderr\": 0.047427623612430116,\n \"acc_norm\": 0.5178571428571429,\n \"acc_norm_stderr\": 0.047427623612430116\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8640776699029126,\n \"acc_stderr\": 0.0339329572976101,\n \"acc_norm\": 0.8640776699029126,\n \"acc_norm_stderr\": 0.0339329572976101\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.9401709401709402,\n \"acc_stderr\": 0.015537514263253858,\n \"acc_norm\": 0.9401709401709402,\n \"acc_norm_stderr\": 0.015537514263253858\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.89,\n \"acc_stderr\": 0.03144660377352202,\n \"acc_norm\": 0.89,\n \"acc_norm_stderr\": 0.03144660377352202\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.9106002554278416,\n \"acc_stderr\": 0.010203017847688312,\n \"acc_norm\": 0.9106002554278416,\n \"acc_norm_stderr\": 0.010203017847688312\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.815028901734104,\n \"acc_stderr\": 0.02090397584208303,\n \"acc_norm\": 0.815028901734104,\n \"acc_norm_stderr\": 0.02090397584208303\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.8100558659217877,\n \"acc_stderr\": 0.013119028310492683,\n \"acc_norm\": 0.8100558659217877,\n \"acc_norm_stderr\": 0.013119028310492683\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.8431372549019608,\n \"acc_stderr\": 0.020823758837580912,\n \"acc_norm\": 0.8431372549019608,\n \"acc_norm_stderr\": 0.020823758837580912\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.797427652733119,\n \"acc_stderr\": 0.022827317491059686,\n \"acc_norm\": 0.797427652733119,\n \"acc_norm_stderr\": 
0.022827317491059686\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.8641975308641975,\n \"acc_stderr\": 0.019061588181505388,\n \"acc_norm\": 0.8641975308641975,\n \"acc_norm_stderr\": 0.019061588181505388\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.6347517730496454,\n \"acc_stderr\": 0.02872386385328127,\n \"acc_norm\": 0.6347517730496454,\n \"acc_norm_stderr\": 0.02872386385328127\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.590612777053455,\n \"acc_stderr\": 0.012558780895570755,\n \"acc_norm\": 0.590612777053455,\n \"acc_norm_stderr\": 0.012558780895570755\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.8235294117647058,\n \"acc_stderr\": 0.023157468308559342,\n \"acc_norm\": 0.8235294117647058,\n \"acc_norm_stderr\": 0.023157468308559342\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.815359477124183,\n \"acc_stderr\": 0.01569702924075778,\n \"acc_norm\": 0.815359477124183,\n \"acc_norm_stderr\": 0.01569702924075778\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7090909090909091,\n \"acc_stderr\": 0.04350271442923243,\n \"acc_norm\": 0.7090909090909091,\n \"acc_norm_stderr\": 0.04350271442923243\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.8448979591836735,\n \"acc_stderr\": 0.0231747988612186,\n \"acc_norm\": 0.8448979591836735,\n \"acc_norm_stderr\": 0.0231747988612186\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.900497512437811,\n \"acc_stderr\": 0.021166216304659407,\n \"acc_norm\": 0.900497512437811,\n \"acc_norm_stderr\": 0.021166216304659407\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.91,\n \"acc_stderr\": 0.02876234912646613,\n \"acc_norm\": 0.91,\n \"acc_norm_stderr\": 0.02876234912646613\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5783132530120482,\n \"acc_stderr\": 0.03844453181770917,\n \"acc_norm\": 0.5783132530120482,\n 
\"acc_norm_stderr\": 0.03844453181770917\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8771929824561403,\n \"acc_stderr\": 0.02517298435015577,\n \"acc_norm\": 0.8771929824561403,\n \"acc_norm_stderr\": 0.02517298435015577\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4810281517747858,\n \"mc1_stderr\": 0.01749089640576235,\n \"mc2\": 0.6367417890283518,\n \"mc2_stderr\": 0.01475171297078638\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.823993685872139,\n \"acc_stderr\": 0.010703090882320705\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.7225170583775588,\n \"acc_stderr\": 0.012333447581047539\n }\n}\n```", "repo_url": "https://huggingface.co/abacusai/MM-OV-bagel-DPO-34b-c1000-250", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|arc:challenge|25_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|arc:challenge|25_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|gsm8k|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|gsm8k|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|hellaswag|10_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|hellaswag|10_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hellaswag|10_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T07-56-05.449917.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-24T07-56-05.449917.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-24T07-56-05.449917.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T07-56-05.449917.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T07-56-05.449917.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-24T07-56-05.449917.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T07-56-05.449917.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T07-56-05.449917.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T07-56-05.449917.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T07-56-05.449917.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-24T07-56-05.449917.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-24T07-56-05.449917.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T07-56-05.449917.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-24T07-56-05.449917.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T07-56-05.449917.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T07-56-05.449917.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T07-56-05.449917.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-24T07-56-05.449917.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T07-56-05.449917.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T07-56-05.449917.parquet", 
"**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T07-56-05.449917.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T07-56-05.449917.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T07-56-05.449917.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T07-56-05.449917.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T07-56-05.449917.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T07-56-05.449917.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T07-56-05.449917.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T07-56-05.449917.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T07-56-05.449917.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T07-56-05.449917.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T07-56-05.449917.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T07-56-05.449917.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-24T07-56-05.449917.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T07-56-05.449917.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-24T07-56-05.449917.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T07-56-05.449917.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T07-56-05.449917.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T07-56-05.449917.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-24T07-56-05.449917.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-24T07-56-05.449917.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T07-56-05.449917.parquet", 
"**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T07-56-05.449917.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T07-56-05.449917.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T07-56-05.449917.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-24T07-56-05.449917.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-24T07-56-05.449917.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-24T07-56-05.449917.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T07-56-05.449917.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-24T07-56-05.449917.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T07-56-05.449917.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T07-56-05.449917.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-24T07-56-05.449917.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-24T07-56-05.449917.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-24T07-56-05.449917.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T07-56-05.449917.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-24T07-56-05.449917.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-24T07-59-43.945933.parquet", 
"**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T07-59-43.945933.parquet", 
"**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-24T07-59-43.945933.parquet", 
"**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T07-59-43.945933.parquet", 
"**/details_harness|hendrycksTest-econometrics|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T07-59-43.945933.parquet", 
"**/details_harness|hendrycksTest-international_law|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T07-59-43.945933.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-24T07-59-43.945933.parquet", 
"**/details_harness|hendrycksTest-world_religions|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": 
[{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", 
"path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T07-59-43.945933.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": 
["**/details_harness|hendrycksTest-nutrition|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": 
"2024_01_24T07_59_43.945933", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", 
"path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-world_religions|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["**/details_harness|winogrande|5_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["**/details_harness|winogrande|5_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-24T07-59-43.945933.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_01_24T07_56_05.449917", "path": ["results_2024-01-24T07-56-05.449917.parquet"]}, {"split": "2024_01_24T07_59_43.945933", "path": ["results_2024-01-24T07-59-43.945933.parquet"]}, {"split": "latest", "path": ["results_2024-01-24T07-59-43.945933.parquet"]}]}]}
2024-01-24T08:02:04+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of abacusai/MM-OV-bagel-DPO-34b-c1000-250 Dataset automatically created during the evaluation run of model abacusai/MM-OV-bagel-DPO-34b-c1000-250 on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2024-01-24T07:59:43.945933 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each one in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
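The card above says "you can for instance do the following" but the load snippet was stripped in this rendering. A minimal sketch of what it would look like, assuming the leaderboard's usual `details_<org>__<model>` repo-naming convention seen for other models in this file (the exact repo name is inferred here, not confirmed):

```python
# The details repo name is an assumption, derived from the leaderboard's usual
# "details_<org>__<model>" convention used for other models in this file.
model_id = "abacusai/MM-OV-bagel-DPO-34b-c1000-250"
repo = "open-llm-leaderboard/details_" + model_id.replace("/", "__")
print(repo)  # open-llm-leaderboard/details_abacusai__MM-OV-bagel-DPO-34b-c1000-250

# With the `datasets` library installed, a run's details could then be loaded with:
# from datasets import load_dataset
# data = load_dataset(repo, "harness_winogrande_5", split="train")
```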
[ "# Dataset Card for Evaluation run of abacusai/MM-OV-bagel-DPO-34b-c1000-250\n\n\n\nDataset automatically created during the evaluation run of model abacusai/MM-OV-bagel-DPO-34b-c1000-250 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-01-24T07:59:43.945933(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of abacusai/MM-OV-bagel-DPO-34b-c1000-250\n\n\n\nDataset automatically created during the evaluation run of model abacusai/MM-OV-bagel-DPO-34b-c1000-250 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-01-24T07:59:43.945933(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
f8b3d8529cf51ebe4d3922a226e4b2950a3093e8
- **Homepage:** https://www.darrow.ai/ - **Repository:** https://github.com/darrow-labs/LegalLens - **Paper:** https://arxiv.org/pdf/2402.04335.pdf - **Point of Contact:** [Dor Bernsohn](mailto:[email protected]), [Gil Semo](mailto:[email protected]) ## Overview The LegalLensNLI dataset is a unique collection of entries designed to show the connection between legal cases and the people affected by them. It is intended for machine learning tools that investigate legal violations, specifically class action complaints. The main goal is to find people who have been harmed by certain legal cases and to help them get their compensation claims processed. Each row in the dataset contains three key elements: - **Premise**: A concise summary of an actual class action case, carefully condensed to highlight the core legal issue at hand. - **Hypothesis**: An artificially generated text resembling a complaint or commentary as one might find on social media platforms like Reddit, Twitter, or various blog posts. This text is designed to reflect individual expressions or reports related to the summarized case. - **Label**: The relationship between the premise and the hypothesis. ## Structure The repository is structured to facilitate ease of access and utility: - `LegalLensNLI.csv`: The primary dataset file that includes all the legal domain data. - `mnli-by-legal-act`: This directory further categorizes the data into specific legal domains and contains separate `train`, `test`, and `validation` files for each domain to support machine learning tasks. ## Data Fields - **premise**: (str) The summarized background information or context extracted from legal documents, providing the setting or facts upon which the legal reasoning is based. - **hypothesis**: (str) A statement derived from the premise that represents a possible scenario or assertion that is to be evaluated for its truthfulness within the context of the given premise. 
- **legal_act**: (str) The specific legal act or statute that is relevant to the premise and hypothesis, indicating the area of law in question. - **label**: (int) The classification label assigned to the relationship between the premise and the hypothesis, which typically indicates whether the hypothesis is entailed, contradicted, or neutral based on the premise within the legal context. ## Curation Rationale The dataset was curated by Darrow.ai (2023). ## Data Instances Here is how you can load the dataset: ```python from datasets import load_dataset dataset = load_dataset("darrow-ai/LegalLensNLI") ``` ### Citation Information *TBD *LegalLens: Leveraging LLMs for Legal Violation Identification in Unstructured Text* *Proceedings of the 2024 European Chapter of the Association for Computational Linguistics. Malta. 2024* ``` @InProceedings TBD ```
darrow-ai/LegalLensNLI
[ "task_categories:zero-shot-classification", "task_categories:text-classification", "size_categories:n<1K", "language:en", "license:apache-2.0", "legal", "legalnlp", "class action", "arxiv:2402.04335", "region:us" ]
2024-01-24T08:12:02+00:00
{"language": ["en"], "license": "apache-2.0", "size_categories": ["n<1K"], "task_categories": ["zero-shot-classification", "text-classification"], "pretty_name": "JusticeLens", "tags": ["legal", "legalnlp", "class action"]}
2024-02-15T07:12:28+00:00
[ "2402.04335" ]
[ "en" ]
TAGS #task_categories-zero-shot-classification #task_categories-text-classification #size_categories-n<1K #language-English #license-apache-2.0 #legal #legalnlp #class action #arxiv-2402.04335 #region-us
- Homepage: URL - Repository: URL - Paper: URL - Point of Contact: Dor Bernsohn,Gil Semo ## Overview The LegalLensNLI dataset is a unique collection of entries designed to show the connection between legal cases and the people affected by them. It's specially made for machine learning tools that aim to investigate more in the area of legal violations, specifically class action complaints. The main goal is to find people who have been harmed by certain legal cases and to help them get their compensation claims processed. Each row in the dataset contains three key elements: - Premise: This is a concise summary of an actual class action case, carefully summarized to highlight the core legal issue at hand. - Hypothesis: An artificially generated text resembling a complaint or commentary as one might find on social media platforms like Reddit, Twitter, or various blog posts. This text is designed to reflect individual expressions or reports related to the summarized case. - Label: The relationship between the premise and the hypothesis. ## Structure The repository is structured to facilitate ease of access and utility: - 'URL': The primary dataset file that includes all the legal domain data. - 'mnli-by-legal-act': This directory further categorizes the data into specific legal domains and contains separate 'train', 'test', and 'validation' files for each domain to support machine learning tasks. ## Data Fields - premise: (str) The summarized background information or context extracted from legal documents, providing the setting or facts upon which the legal reasoning is based. - hypothesis: (str) A statement derived from the premise that represents a possible scenario or assertion that is to be evaluated for its truthfulness within the context of the given premise. - legal_act: (str) The specific legal act or statute that is relevant to the premise and hypothesis, indicating the area of law in question. 
- label: (int) The classification label assigned to the relationship between the premise and the hypothesis, which typically indicates whether the hypothesis is entailed, contradicted, or neutral based on the premise within the legal context. ## Curation Rationale The dataset was curated by URL (2023). ## Data Instances Here is how you can load the dataset: *TBD *LegalLens: Leveraging LLMs for Legal Violation Identification in Unstructured Text* *Proceedings of the 2024 European Chapter of the Association for Computational Linguistics. Malta. 2024*
[ "## Overview\n\nThe LegalLensNLI dataset is a unique collection of entries designed to show the connection between legal cases and the people affected by them. It's specially made for machine learning tools that aim to investigate more in the area of legal violations, specifically class action complaints. The main goal is to find people who have been harmed by certain legal cases and to help them get their compensation claims processed.\n\nEach row in the dataset contains three key elements:\n\n- Premise: This is a concise summary of an actual class action case, carefully summarized to highlight the core legal issue at hand.\n- Hypothesis: An artificially generated text resembling a complaint or commentary as one might find on social media platforms like Reddit, Twitter, or various blog posts. This text is designed to reflect individual expressions or reports related to the summarized case.\n- Label: The relationship between the premise and the hypothesis.", "## Structure\n\nThe repository is structured to facilitate ease of access and utility:\n\n- 'URL': The primary dataset file that includes all the legal domain data.\n- 'mnli-by-legal-act': This directory further categorizes the data into specific legal domains and contains separate 'train', 'test', and 'validation' files for each domain to support machine learning tasks.", "## Data Fields\n\n- premise: (str) The summarized background information or context extracted from legal documents, providing the setting or facts upon which the legal reasoning is based.\n- hypothesis: (str) A statement derived from the premise that represents a possible scenario or assertion that is to be evaluated for its truthfulness within the context of the given premise.\n- legal_act: (str) The specific legal act or statute that is relevant to the premise and hypothesis, indicating the area of law in question.\n- label: (int) The classification label assigned to the relationship between the premise and the hypothesis, which 
typically indicates whether the hypothesis is entailed, contradicted, or neutral based on the premise within the legal context.", "## Curation Rationale\n\nThe dataset was curated by URL (2023).", "## Data Instances\n\nHere is how you can load the dataset:\n\n\n\n\n\n\n\n*TBD \n*LegalLens: Leveraging LLMs for Legal Violation Identification in Unstructured Text*\n*Proceedings of the 2024 European Chapter of the Association for Computational Linguistics. Malta. 2024*" ]
[ "TAGS\n#task_categories-zero-shot-classification #task_categories-text-classification #size_categories-n<1K #language-English #license-apache-2.0 #legal #legalnlp #class action #arxiv-2402.04335 #region-us \n", "## Overview\n\nThe LegalLensNLI dataset is a unique collection of entries designed to show the connection between legal cases and the people affected by them. It's specially made for machine learning tools that aim to investigate more in the area of legal violations, specifically class action complaints. The main goal is to find people who have been harmed by certain legal cases and to help them get their compensation claims processed.\n\nEach row in the dataset contains three key elements:\n\n- Premise: This is a concise summary of an actual class action case, carefully summarized to highlight the core legal issue at hand.\n- Hypothesis: An artificially generated text resembling a complaint or commentary as one might find on social media platforms like Reddit, Twitter, or various blog posts. 
This text is designed to reflect individual expressions or reports related to the summarized case.\n- Label: The relationship between the premise and the hypothesis.", "## Structure\n\nThe repository is structured to facilitate ease of access and utility:\n\n- 'URL': The primary dataset file that includes all the legal domain data.\n- 'mnli-by-legal-act': This directory further categorizes the data into specific legal domains and contains separate 'train', 'test', and 'validation' files for each domain to support machine learning tasks.", "## Data Fields\n\n- premise: (str) The summarized background information or context extracted from legal documents, providing the setting or facts upon which the legal reasoning is based.\n- hypothesis: (str) A statement derived from the premise that represents a possible scenario or assertion that is to be evaluated for its truthfulness within the context of the given premise.\n- legal_act: (str) The specific legal act or statute that is relevant to the premise and hypothesis, indicating the area of law in question.\n- label: (int) The classification label assigned to the relationship between the premise and the hypothesis, which typically indicates whether the hypothesis is entailed, contradicted, or neutral based on the premise within the legal context.", "## Curation Rationale\n\nThe dataset was curated by URL (2023).", "## Data Instances\n\nHere is how you can load the dataset:\n\n\n\n\n\n\n\n*TBD \n*LegalLens: Leveraging LLMs for Legal Violation Identification in Unstructured Text*\n*Proceedings of the 2024 European Chapter of the Association for Computational Linguistics. Malta. 2024*" ]
e278d2df7402723c0b46786cbacfecd26894e24f
# Dataset Card for Evaluation run of mlabonne/Darewin-7B <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [mlabonne/Darewin-7B](https://huggingface.co/mlabonne/Darewin-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_mlabonne__Darewin-7B", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2024-01-24T08:13:57.054758](https://huggingface.co/datasets/open-llm-leaderboard/details_mlabonne__Darewin-7B/blob/main/results_2024-01-24T08-13-57.054758.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.6560164129347533, "acc_stderr": 0.031962689434190776, "acc_norm": 0.6562048573975326, "acc_norm_stderr": 0.03261805811687008, "mc1": 0.4394124847001224, "mc1_stderr": 0.01737452048251371, "mc2": 0.6038301238600606, "mc2_stderr": 0.015402053910396342 }, "harness|arc:challenge|25": { "acc": 0.6621160409556314, "acc_stderr": 0.013822047922283516, "acc_norm": 0.6860068259385665, "acc_norm_stderr": 0.013562691224726304 }, "harness|hellaswag|10": { "acc": 0.6786496713802032, "acc_stderr": 0.004660405565338756, "acc_norm": 0.8621788488348935, "acc_norm_stderr": 0.003440076775300576 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.34, "acc_stderr": 0.04760952285695236, "acc_norm": 0.34, "acc_norm_stderr": 0.04760952285695236 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.6222222222222222, "acc_stderr": 0.04188307537595852, "acc_norm": 0.6222222222222222, "acc_norm_stderr": 0.04188307537595852 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.6907894736842105, "acc_stderr": 0.037610708698674805, "acc_norm": 0.6907894736842105, "acc_norm_stderr": 0.037610708698674805 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.62, "acc_stderr": 0.048783173121456316, "acc_norm": 0.62, "acc_norm_stderr": 0.048783173121456316 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.7056603773584905, "acc_stderr": 0.02804918631569525, "acc_norm": 0.7056603773584905, "acc_norm_stderr": 0.02804918631569525 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.7916666666666666, "acc_stderr": 0.033961162058453336, "acc_norm": 0.7916666666666666, "acc_norm_stderr": 0.033961162058453336 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.48, "acc_stderr": 0.050211673156867795, "acc_norm": 0.48, "acc_norm_stderr": 0.050211673156867795 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.54, "acc_stderr": 0.05009082659620333, "acc_norm": 0.54, 
"acc_norm_stderr": 0.05009082659620333 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.35, "acc_stderr": 0.047937248544110196, "acc_norm": 0.35, "acc_norm_stderr": 0.047937248544110196 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6647398843930635, "acc_stderr": 0.03599586301247077, "acc_norm": 0.6647398843930635, "acc_norm_stderr": 0.03599586301247077 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.4411764705882353, "acc_stderr": 0.049406356306056595, "acc_norm": 0.4411764705882353, "acc_norm_stderr": 0.049406356306056595 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.76, "acc_stderr": 0.04292346959909283, "acc_norm": 0.76, "acc_norm_stderr": 0.04292346959909283 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.5829787234042553, "acc_stderr": 0.03223276266711712, "acc_norm": 0.5829787234042553, "acc_norm_stderr": 0.03223276266711712 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.5, "acc_stderr": 0.047036043419179864, "acc_norm": 0.5, "acc_norm_stderr": 0.047036043419179864 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.5655172413793104, "acc_stderr": 0.04130740879555498, "acc_norm": 0.5655172413793104, "acc_norm_stderr": 0.04130740879555498 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.4126984126984127, "acc_stderr": 0.02535574126305526, "acc_norm": 0.4126984126984127, "acc_norm_stderr": 0.02535574126305526 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.4365079365079365, "acc_stderr": 0.04435932892851466, "acc_norm": 0.4365079365079365, "acc_norm_stderr": 0.04435932892851466 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.36, "acc_stderr": 0.048241815132442176, "acc_norm": 0.36, "acc_norm_stderr": 0.048241815132442176 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.7903225806451613, "acc_stderr": 0.023157879349083522, "acc_norm": 0.7903225806451613, "acc_norm_stderr": 0.023157879349083522 }, "harness|hendrycksTest-high_school_chemistry|5": { 
"acc": 0.4876847290640394, "acc_stderr": 0.035169204442208966, "acc_norm": 0.4876847290640394, "acc_norm_stderr": 0.035169204442208966 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.73, "acc_stderr": 0.044619604333847394, "acc_norm": 0.73, "acc_norm_stderr": 0.044619604333847394 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.7696969696969697, "acc_stderr": 0.0328766675860349, "acc_norm": 0.7696969696969697, "acc_norm_stderr": 0.0328766675860349 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.7828282828282829, "acc_stderr": 0.02937661648494563, "acc_norm": 0.7828282828282829, "acc_norm_stderr": 0.02937661648494563 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.8963730569948186, "acc_stderr": 0.02199531196364424, "acc_norm": 0.8963730569948186, "acc_norm_stderr": 0.02199531196364424 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.6846153846153846, "acc_stderr": 0.023559646983189946, "acc_norm": 0.6846153846153846, "acc_norm_stderr": 0.023559646983189946 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.35185185185185186, "acc_stderr": 0.02911661760608301, "acc_norm": 0.35185185185185186, "acc_norm_stderr": 0.02911661760608301 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.6932773109243697, "acc_stderr": 0.02995382389188704, "acc_norm": 0.6932773109243697, "acc_norm_stderr": 0.02995382389188704 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.33112582781456956, "acc_stderr": 0.038425817186598696, "acc_norm": 0.33112582781456956, "acc_norm_stderr": 0.038425817186598696 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8568807339449541, "acc_stderr": 0.015014462497168589, "acc_norm": 0.8568807339449541, "acc_norm_stderr": 0.015014462497168589 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.5416666666666666, "acc_stderr": 0.03398110890294636, "acc_norm": 0.5416666666666666, 
"acc_norm_stderr": 0.03398110890294636 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.8284313725490197, "acc_stderr": 0.02646056956124064, "acc_norm": 0.8284313725490197, "acc_norm_stderr": 0.02646056956124064 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.810126582278481, "acc_stderr": 0.02553010046023349, "acc_norm": 0.810126582278481, "acc_norm_stderr": 0.02553010046023349 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.6905829596412556, "acc_stderr": 0.03102441174057221, "acc_norm": 0.6905829596412556, "acc_norm_stderr": 0.03102441174057221 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.816793893129771, "acc_stderr": 0.03392770926494733, "acc_norm": 0.816793893129771, "acc_norm_stderr": 0.03392770926494733 }, "harness|hendrycksTest-international_law|5": { "acc": 0.8016528925619835, "acc_stderr": 0.03640118271990946, "acc_norm": 0.8016528925619835, "acc_norm_stderr": 0.03640118271990946 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.8055555555555556, "acc_stderr": 0.038260763248848646, "acc_norm": 0.8055555555555556, "acc_norm_stderr": 0.038260763248848646 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7730061349693251, "acc_stderr": 0.03291099578615769, "acc_norm": 0.7730061349693251, "acc_norm_stderr": 0.03291099578615769 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.4732142857142857, "acc_stderr": 0.047389751192741546, "acc_norm": 0.4732142857142857, "acc_norm_stderr": 0.047389751192741546 }, "harness|hendrycksTest-management|5": { "acc": 0.7961165048543689, "acc_stderr": 0.039891398595317706, "acc_norm": 0.7961165048543689, "acc_norm_stderr": 0.039891398595317706 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8717948717948718, "acc_stderr": 0.02190190511507333, "acc_norm": 0.8717948717948718, "acc_norm_stderr": 0.02190190511507333 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.73, "acc_stderr": 0.0446196043338474, "acc_norm": 0.73, "acc_norm_stderr": 
0.0446196043338474 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.8378033205619413, "acc_stderr": 0.013182222616720887, "acc_norm": 0.8378033205619413, "acc_norm_stderr": 0.013182222616720887 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.7485549132947977, "acc_stderr": 0.023357365785874037, "acc_norm": 0.7485549132947977, "acc_norm_stderr": 0.023357365785874037 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.37318435754189944, "acc_stderr": 0.016175692013381954, "acc_norm": 0.37318435754189944, "acc_norm_stderr": 0.016175692013381954 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.7320261437908496, "acc_stderr": 0.025360603796242553, "acc_norm": 0.7320261437908496, "acc_norm_stderr": 0.025360603796242553 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.7041800643086816, "acc_stderr": 0.025922371788818763, "acc_norm": 0.7041800643086816, "acc_norm_stderr": 0.025922371788818763 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.7561728395061729, "acc_stderr": 0.02389187954195961, "acc_norm": 0.7561728395061729, "acc_norm_stderr": 0.02389187954195961 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.48226950354609927, "acc_stderr": 0.02980873964223777, "acc_norm": 0.48226950354609927, "acc_norm_stderr": 0.02980873964223777 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.47196870925684486, "acc_stderr": 0.012750151802922436, "acc_norm": 0.47196870925684486, "acc_norm_stderr": 0.012750151802922436 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.7169117647058824, "acc_stderr": 0.02736586113151381, "acc_norm": 0.7169117647058824, "acc_norm_stderr": 0.02736586113151381 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6781045751633987, "acc_stderr": 0.01890101532209309, "acc_norm": 0.6781045751633987, "acc_norm_stderr": 0.01890101532209309 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6818181818181818, "acc_stderr": 0.04461272175910509, "acc_norm": 0.6818181818181818, 
"acc_norm_stderr": 0.04461272175910509 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.7224489795918367, "acc_stderr": 0.028666857790274648, "acc_norm": 0.7224489795918367, "acc_norm_stderr": 0.028666857790274648 }, "harness|hendrycksTest-sociology|5": { "acc": 0.845771144278607, "acc_stderr": 0.02553843336857833, "acc_norm": 0.845771144278607, "acc_norm_stderr": 0.02553843336857833 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.86, "acc_stderr": 0.0348735088019777, "acc_norm": 0.86, "acc_norm_stderr": 0.0348735088019777 }, "harness|hendrycksTest-virology|5": { "acc": 0.5662650602409639, "acc_stderr": 0.03858158940685516, "acc_norm": 0.5662650602409639, "acc_norm_stderr": 0.03858158940685516 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.8128654970760234, "acc_stderr": 0.029913127232368036, "acc_norm": 0.8128654970760234, "acc_norm_stderr": 0.029913127232368036 }, "harness|truthfulqa:mc|0": { "mc1": 0.4394124847001224, "mc1_stderr": 0.01737452048251371, "mc2": 0.6038301238600606, "mc2_stderr": 0.015402053910396342 }, "harness|winogrande|5": { "acc": 0.797947908445146, "acc_stderr": 0.011285013754047443 }, "harness|gsm8k|5": { "acc": 0.7103866565579985, "acc_stderr": 0.012493927348659629 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. 
--> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. 
--> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
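As noted above, each run is stored as a split named after the run's timestamp. Judging from the split names visible in this card's configs (e.g. `2024_01_24T08_13_57.054758` for run `2024-01-24T08:13:57.054758`), the split name appears to be the timestamp with hyphens and colons replaced by underscores. The helper below is a small sketch of that inferred convention, not an official API:

```python
def split_name(run_timestamp: str) -> str:
    """Map a run timestamp (e.g. "2024-01-24T08:13:57.054758") to the
    split name used in this dataset ("2024_01_24T08_13_57.054758").
    Inferred from the split names in this card's configs; an assumption,
    not a documented API."""
    return run_timestamp.replace("-", "_").replace(":", "_")

# A specific run can then be requested instead of the "latest" split, e.g.:
#   load_dataset("open-llm-leaderboard/details_mlabonne__Darewin-7B",
#                "harness_winogrande_5",
#                split=split_name("2024-01-24T08:13:57.054758"))
print(split_name("2024-01-24T08:13:57.054758"))  # 2024_01_24T08_13_57.054758
```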
open-llm-leaderboard/details_mlabonne__Darewin-7B
[ "region:us" ]
2024-01-24T08:16:24+00:00
{"pretty_name": "Evaluation run of mlabonne/Darewin-7B", "dataset_summary": "Dataset automatically created during the evaluation run of model [mlabonne/Darewin-7B](https://huggingface.co/mlabonne/Darewin-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_mlabonne__Darewin-7B\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-24T08:13:57.054758](https://huggingface.co/datasets/open-llm-leaderboard/details_mlabonne__Darewin-7B/blob/main/results_2024-01-24T08-13-57.054758.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6560164129347533,\n \"acc_stderr\": 0.031962689434190776,\n \"acc_norm\": 0.6562048573975326,\n \"acc_norm_stderr\": 0.03261805811687008,\n \"mc1\": 0.4394124847001224,\n \"mc1_stderr\": 0.01737452048251371,\n \"mc2\": 0.6038301238600606,\n \"mc2_stderr\": 0.015402053910396342\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6621160409556314,\n \"acc_stderr\": 0.013822047922283516,\n \"acc_norm\": 0.6860068259385665,\n \"acc_norm_stderr\": 0.013562691224726304\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6786496713802032,\n \"acc_stderr\": 0.004660405565338756,\n \"acc_norm\": 0.8621788488348935,\n \"acc_norm_stderr\": 0.003440076775300576\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695236,\n \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695236\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6222222222222222,\n \"acc_stderr\": 0.04188307537595852,\n \"acc_norm\": 0.6222222222222222,\n \"acc_norm_stderr\": 0.04188307537595852\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6907894736842105,\n \"acc_stderr\": 0.037610708698674805,\n \"acc_norm\": 0.6907894736842105,\n \"acc_norm_stderr\": 0.037610708698674805\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.62,\n \"acc_stderr\": 0.048783173121456316,\n \"acc_norm\": 0.62,\n \"acc_norm_stderr\": 0.048783173121456316\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.7056603773584905,\n \"acc_stderr\": 0.02804918631569525,\n \"acc_norm\": 0.7056603773584905,\n \"acc_norm_stderr\": 0.02804918631569525\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7916666666666666,\n \"acc_stderr\": 0.033961162058453336,\n \"acc_norm\": 0.7916666666666666,\n \"acc_norm_stderr\": 0.033961162058453336\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.48,\n 
\"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.48,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.54,\n \"acc_stderr\": 0.05009082659620333,\n \"acc_norm\": 0.54,\n \"acc_norm_stderr\": 0.05009082659620333\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.047937248544110196,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.047937248544110196\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6647398843930635,\n \"acc_stderr\": 0.03599586301247077,\n \"acc_norm\": 0.6647398843930635,\n \"acc_norm_stderr\": 0.03599586301247077\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.4411764705882353,\n \"acc_stderr\": 0.049406356306056595,\n \"acc_norm\": 0.4411764705882353,\n \"acc_norm_stderr\": 0.049406356306056595\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.76,\n \"acc_stderr\": 0.04292346959909283,\n \"acc_norm\": 0.76,\n \"acc_norm_stderr\": 0.04292346959909283\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5829787234042553,\n \"acc_stderr\": 0.03223276266711712,\n \"acc_norm\": 0.5829787234042553,\n \"acc_norm_stderr\": 0.03223276266711712\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.047036043419179864,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.047036043419179864\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5655172413793104,\n \"acc_stderr\": 0.04130740879555498,\n \"acc_norm\": 0.5655172413793104,\n \"acc_norm_stderr\": 0.04130740879555498\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.4126984126984127,\n \"acc_stderr\": 0.02535574126305526,\n \"acc_norm\": 0.4126984126984127,\n \"acc_norm_stderr\": 0.02535574126305526\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4365079365079365,\n \"acc_stderr\": 0.04435932892851466,\n \"acc_norm\": 
0.4365079365079365,\n \"acc_norm_stderr\": 0.04435932892851466\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.36,\n \"acc_stderr\": 0.048241815132442176,\n \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.048241815132442176\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7903225806451613,\n \"acc_stderr\": 0.023157879349083522,\n \"acc_norm\": 0.7903225806451613,\n \"acc_norm_stderr\": 0.023157879349083522\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.4876847290640394,\n \"acc_stderr\": 0.035169204442208966,\n \"acc_norm\": 0.4876847290640394,\n \"acc_norm_stderr\": 0.035169204442208966\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.73,\n \"acc_stderr\": 0.044619604333847394,\n \"acc_norm\": 0.73,\n \"acc_norm_stderr\": 0.044619604333847394\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7696969696969697,\n \"acc_stderr\": 0.0328766675860349,\n \"acc_norm\": 0.7696969696969697,\n \"acc_norm_stderr\": 0.0328766675860349\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7828282828282829,\n \"acc_stderr\": 0.02937661648494563,\n \"acc_norm\": 0.7828282828282829,\n \"acc_norm_stderr\": 0.02937661648494563\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8963730569948186,\n \"acc_stderr\": 0.02199531196364424,\n \"acc_norm\": 0.8963730569948186,\n \"acc_norm_stderr\": 0.02199531196364424\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6846153846153846,\n \"acc_stderr\": 0.023559646983189946,\n \"acc_norm\": 0.6846153846153846,\n \"acc_norm_stderr\": 0.023559646983189946\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.35185185185185186,\n \"acc_stderr\": 0.02911661760608301,\n \"acc_norm\": 0.35185185185185186,\n \"acc_norm_stderr\": 0.02911661760608301\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 
0.6932773109243697,\n \"acc_stderr\": 0.02995382389188704,\n \"acc_norm\": 0.6932773109243697,\n \"acc_norm_stderr\": 0.02995382389188704\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.33112582781456956,\n \"acc_stderr\": 0.038425817186598696,\n \"acc_norm\": 0.33112582781456956,\n \"acc_norm_stderr\": 0.038425817186598696\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8568807339449541,\n \"acc_stderr\": 0.015014462497168589,\n \"acc_norm\": 0.8568807339449541,\n \"acc_norm_stderr\": 0.015014462497168589\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5416666666666666,\n \"acc_stderr\": 0.03398110890294636,\n \"acc_norm\": 0.5416666666666666,\n \"acc_norm_stderr\": 0.03398110890294636\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8284313725490197,\n \"acc_stderr\": 0.02646056956124064,\n \"acc_norm\": 0.8284313725490197,\n \"acc_norm_stderr\": 0.02646056956124064\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.810126582278481,\n \"acc_stderr\": 0.02553010046023349,\n \"acc_norm\": 0.810126582278481,\n \"acc_norm_stderr\": 0.02553010046023349\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6905829596412556,\n \"acc_stderr\": 0.03102441174057221,\n \"acc_norm\": 0.6905829596412556,\n \"acc_norm_stderr\": 0.03102441174057221\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.816793893129771,\n \"acc_stderr\": 0.03392770926494733,\n \"acc_norm\": 0.816793893129771,\n \"acc_norm_stderr\": 0.03392770926494733\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8016528925619835,\n \"acc_stderr\": 0.03640118271990946,\n \"acc_norm\": 0.8016528925619835,\n \"acc_norm_stderr\": 0.03640118271990946\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8055555555555556,\n \"acc_stderr\": 0.038260763248848646,\n \"acc_norm\": 0.8055555555555556,\n \"acc_norm_stderr\": 
0.038260763248848646\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7730061349693251,\n \"acc_stderr\": 0.03291099578615769,\n \"acc_norm\": 0.7730061349693251,\n \"acc_norm_stderr\": 0.03291099578615769\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4732142857142857,\n \"acc_stderr\": 0.047389751192741546,\n \"acc_norm\": 0.4732142857142857,\n \"acc_norm_stderr\": 0.047389751192741546\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7961165048543689,\n \"acc_stderr\": 0.039891398595317706,\n \"acc_norm\": 0.7961165048543689,\n \"acc_norm_stderr\": 0.039891398595317706\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8717948717948718,\n \"acc_stderr\": 0.02190190511507333,\n \"acc_norm\": 0.8717948717948718,\n \"acc_norm_stderr\": 0.02190190511507333\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.73,\n \"acc_stderr\": 0.0446196043338474,\n \"acc_norm\": 0.73,\n \"acc_norm_stderr\": 0.0446196043338474\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8378033205619413,\n \"acc_stderr\": 0.013182222616720887,\n \"acc_norm\": 0.8378033205619413,\n \"acc_norm_stderr\": 0.013182222616720887\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7485549132947977,\n \"acc_stderr\": 0.023357365785874037,\n \"acc_norm\": 0.7485549132947977,\n \"acc_norm_stderr\": 0.023357365785874037\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.37318435754189944,\n \"acc_stderr\": 0.016175692013381954,\n \"acc_norm\": 0.37318435754189944,\n \"acc_norm_stderr\": 0.016175692013381954\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7320261437908496,\n \"acc_stderr\": 0.025360603796242553,\n \"acc_norm\": 0.7320261437908496,\n \"acc_norm_stderr\": 0.025360603796242553\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7041800643086816,\n \"acc_stderr\": 0.025922371788818763,\n \"acc_norm\": 0.7041800643086816,\n \"acc_norm_stderr\": 
0.025922371788818763\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7561728395061729,\n \"acc_stderr\": 0.02389187954195961,\n \"acc_norm\": 0.7561728395061729,\n \"acc_norm_stderr\": 0.02389187954195961\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.48226950354609927,\n \"acc_stderr\": 0.02980873964223777,\n \"acc_norm\": 0.48226950354609927,\n \"acc_norm_stderr\": 0.02980873964223777\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.47196870925684486,\n \"acc_stderr\": 0.012750151802922436,\n \"acc_norm\": 0.47196870925684486,\n \"acc_norm_stderr\": 0.012750151802922436\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.7169117647058824,\n \"acc_stderr\": 0.02736586113151381,\n \"acc_norm\": 0.7169117647058824,\n \"acc_norm_stderr\": 0.02736586113151381\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6781045751633987,\n \"acc_stderr\": 0.01890101532209309,\n \"acc_norm\": 0.6781045751633987,\n \"acc_norm_stderr\": 0.01890101532209309\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6818181818181818,\n \"acc_stderr\": 0.04461272175910509,\n \"acc_norm\": 0.6818181818181818,\n \"acc_norm_stderr\": 0.04461272175910509\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7224489795918367,\n \"acc_stderr\": 0.028666857790274648,\n \"acc_norm\": 0.7224489795918367,\n \"acc_norm_stderr\": 0.028666857790274648\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.845771144278607,\n \"acc_stderr\": 0.02553843336857833,\n \"acc_norm\": 0.845771144278607,\n \"acc_norm_stderr\": 0.02553843336857833\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.86,\n \"acc_stderr\": 0.0348735088019777,\n \"acc_norm\": 0.86,\n \"acc_norm_stderr\": 0.0348735088019777\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5662650602409639,\n \"acc_stderr\": 0.03858158940685516,\n \"acc_norm\": 0.5662650602409639,\n 
\"acc_norm_stderr\": 0.03858158940685516\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8128654970760234,\n \"acc_stderr\": 0.029913127232368036,\n \"acc_norm\": 0.8128654970760234,\n \"acc_norm_stderr\": 0.029913127232368036\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4394124847001224,\n \"mc1_stderr\": 0.01737452048251371,\n \"mc2\": 0.6038301238600606,\n \"mc2_stderr\": 0.015402053910396342\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.797947908445146,\n \"acc_stderr\": 0.011285013754047443\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.7103866565579985,\n \"acc_stderr\": 0.012493927348659629\n }\n}\n```", "repo_url": "https://huggingface.co/mlabonne/Darewin-7B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|arc:challenge|25_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|gsm8k|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|hellaswag|10_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-24T08-13-57.054758.parquet", 
"**/details_harness|hendrycksTest-astronomy|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T08-13-57.054758.parquet", 
"**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T08-13-57.054758.parquet", 
"**/details_harness|hendrycksTest-nutrition|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T08-13-57.054758.parquet", 
"**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T08-13-57.054758.parquet", 
"**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-24T08-13-57.054758.parquet", 
"**/details_harness|hendrycksTest-security_studies|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-24T08-13-57.054758.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": 
["**/details_harness|hendrycksTest-college_physics|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", 
"data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-24T08-13-57.054758.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", 
"path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["**/details_harness|winogrande|5_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-24T08-13-57.054758.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_01_24T08_13_57.054758", "path": ["results_2024-01-24T08-13-57.054758.parquet"]}, {"split": "latest", "path": ["results_2024-01-24T08-13-57.054758.parquet"]}]}]}
2024-01-24T08:16:50+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of mlabonne/Darewin-7B Dataset automatically created during the evaluation run of model mlabonne/Darewin-7B on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2024-01-24T08:13:57.054758 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
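The card text above says "you can for instance do the following" but the loading snippet itself was stripped during processing. A minimal sketch of how such a run is typically loaded, assuming the leaderboard's usual `open-llm-leaderboard/details_<org>__<model>` repo-naming convention (the repo id below is inferred from that convention, not stated in this card; the config and split names come from the metadata above):

```python
# Build the Open LLM Leaderboard details-repo id for a model.
# NOTE: the "open-llm-leaderboard/details_<org>__<model>" naming is an
# assumption based on the leaderboard's usual convention for details repos.
def details_repo(model_name: str) -> str:
    return "open-llm-leaderboard/details_" + model_name.replace("/", "__")

repo_id = details_repo("mlabonne/Darewin-7B")
print(repo_id)  # open-llm-leaderboard/details_mlabonne__Darewin-7B

# Loading one eval config would then look like this (requires the `datasets`
# library and network access, so it is left commented out here):
#   from datasets import load_dataset
#   data = load_dataset(repo_id, "harness_winogrande_5", split="latest")
```

The `"latest"` split always mirrors the most recent run, while the timestamped splits (e.g. `2024_01_24T08_13_57.054758`) pin a specific evaluation.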
[ "# Dataset Card for Evaluation run of mlabonne/Darewin-7B\n\n\n\nDataset automatically created during the evaluation run of model mlabonne/Darewin-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-01-24T08:13:57.054758(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of mlabonne/Darewin-7B\n\n\n\nDataset automatically created during the evaluation run of model mlabonne/Darewin-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-01-24T08:13:57.054758(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
06c3fd837b51031cd2d8eea73485376a633d3428
# Dataset of 2b/2B/2B/2B (Nikke: Goddess of Victory) This is the dataset of 2b/2B/2B/2B (Nikke: Goddess of Victory), containing 500 images and their tags. The core tags of this character are `short_hair, white_hair, hairband, breasts, mole, mole_under_mouth, black_hairband, medium_breasts, covered_eyes`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 500 | 704.53 MiB | [Download](https://huggingface.co/datasets/CyberHarem/2b_nikke/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 500 | 408.15 MiB | [Download](https://huggingface.co/datasets/CyberHarem/2b_nikke/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 1125 | 788.46 MiB | [Download](https://huggingface.co/datasets/CyberHarem/2b_nikke/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 500 | 626.54 MiB | [Download](https://huggingface.co/datasets/CyberHarem/2b_nikke/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 1125 | 1.08 GiB | [Download](https://huggingface.co/datasets/CyberHarem/2b_nikke/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. 
| ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/2b_nikke', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 9 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, solo, blue_eyes, alternate_costume, black_skirt, looking_at_viewer, white_shirt, chair, collarbone, contemporary, full_body, no_blindfold, pantyhose, pencil_skirt, sitting | | 1 | 27 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | 
![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, black_blindfold, black_thighhighs, katana, solo, black_dress, cleavage_cutout, holding_sword, black_gloves, feather-trimmed_sleeves, thigh_boots, juliet_sleeves, leotard, black_footwear, high_heel_boots, thighhighs_under_boots | | 2 | 7 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, black_blindfold, black_dress, black_gloves, black_thighhighs, cleavage_cutout, facing_viewer, feather-trimmed_sleeves, holding_sword, juliet_sleeves, katana, pink_lips, side_slit, solo, thigh_boots, turtleneck, vambraces, black_footwear, cowboy_shot, short_dress, closed_mouth, nose, thighhighs_under_boots, white_leotard, bangs, parted_lips, ribbon, standing | | 3 | 9 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, black_blindfold, black_dress, cleavage_cutout, juliet_sleeves, solo, black_gloves, black_thighhighs, feather-trimmed_sleeves, boots | | 4 | 6 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | 1girl, black_blindfold, black_dress, cleavage_cutout, closed_mouth, facing_viewer, juliet_sleeves, solo, turtleneck, upper_body, pink_lips, simple_background, sword | | 5 | 6 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | 1girl, black_dress, blue_eyes, cleavage_cutout, juliet_sleeves, no_blindfold, solo, upper_body, closed_mouth, turtleneck, hair_between_eyes, lips, looking_at_viewer | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | 
blue_eyes | alternate_costume | black_skirt | looking_at_viewer | white_shirt | chair | collarbone | contemporary | full_body | no_blindfold | pantyhose | pencil_skirt | sitting | black_blindfold | black_thighhighs | katana | black_dress | cleavage_cutout | holding_sword | black_gloves | feather-trimmed_sleeves | thigh_boots | juliet_sleeves | leotard | black_footwear | high_heel_boots | thighhighs_under_boots | facing_viewer | pink_lips | side_slit | turtleneck | vambraces | cowboy_shot | short_dress | closed_mouth | nose | white_leotard | bangs | parted_lips | ribbon | standing | boots | upper_body | simple_background | sword | hair_between_eyes | lips | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:------------|:--------------------|:--------------|:--------------------|:--------------|:--------|:-------------|:---------------|:------------|:---------------|:------------|:---------------|:----------|:------------------|:-------------------|:---------|:--------------|:------------------|:----------------|:---------------|:--------------------------|:--------------|:-----------------|:----------|:-----------------|:------------------|:-------------------------|:----------------|:------------|:------------|:-------------|:------------|:--------------|:--------------|:---------------|:-------|:----------------|:--------|:--------------|:---------|:-----------|:--------|:-------------|:--------------------|:--------|:--------------------|:-------| | 0 | 9 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1 | 27 | ![](samples/1/clu1-sample0.png) | 
![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | 2 | 7 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | 3 | 9 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | X | | | | | | | | | | | | | | X | X | | X | X | | X | X | | X | | | | | | | | | | | | | | | | | | | X | | | | | | | 4 | 6 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | X | X | | | | | | | | | | | | | | X | | | X | X | | | | | X | | | | | X | X | | X | | | | X | | | | | | | | X | X | X | | | | 5 | 6 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | X | X | X | | | X | | | | | | X | | | | | | | X | X | | | | | X | | | | | | | | X | | | | X | | | | | | | | X | | | X | X |
CyberHarem/2b_nikke
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2024-01-24T08:30:43+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-24T10:17:11+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of 2b/2B/2B/2B (Nikke: Goddess of Victory) ================================================== This is the dataset of 2b/2B/2B/2B (Nikke: Goddess of Victory), containing 500 images and their tags. The core tags of this character are 'short\_hair, white\_hair, hairband, breasts, mole, mole\_under\_mouth, black\_hairband, medium\_breasts, covered\_eyes', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
ef7ef1942c1e345ced7670446a9ece00c563cb35
# Dataset of eunhwa/ウンファ/银华/은화 (Nikke: Goddess of Victory) This is the dataset of eunhwa/ウンファ/银华/은화 (Nikke: Goddess of Victory), containing 43 images and their tags. The core tags of this character are `black_hair, bangs, long_hair, purple_eyes, breasts, hat, multicolored_hair, beret`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:----------|:--------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 43 | 52.31 MiB | [Download](https://huggingface.co/datasets/CyberHarem/eunhwa_nikke/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 43 | 31.41 MiB | [Download](https://huggingface.co/datasets/CyberHarem/eunhwa_nikke/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 102 | 64.63 MiB | [Download](https://huggingface.co/datasets/CyberHarem/eunhwa_nikke/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 43 | 48.01 MiB | [Download](https://huggingface.co/datasets/CyberHarem/eunhwa_nikke/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 102 | 89.40 MiB | [Download](https://huggingface.co/datasets/CyberHarem/eunhwa_nikke/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. 
| ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/eunhwa_nikke', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 6 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, looking_at_viewer, solo, black_gloves, black_headwear, fingerless_gloves, black_shirt, black_thighhighs, long_sleeves, medium_breasts, purple_hair, sailor_collar, black_jacket, black_panties, closed_mouth, cowboy_shot, crop_top, holding_weapon, neckerchief, rifle, thighs, white_background | ### Table Version | # | Samples | Img-1 | Img-2 | 
Img-3 | Img-4 | Img-5 | 1girl | looking_at_viewer | solo | black_gloves | black_headwear | fingerless_gloves | black_shirt | black_thighhighs | long_sleeves | medium_breasts | purple_hair | sailor_collar | black_jacket | black_panties | closed_mouth | cowboy_shot | crop_top | holding_weapon | neckerchief | rifle | thighs | white_background | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------------|:-------|:---------------|:-----------------|:--------------------|:--------------|:-------------------|:---------------|:-----------------|:--------------|:----------------|:---------------|:----------------|:---------------|:--------------|:-----------|:-----------------|:--------------|:--------|:---------|:-------------------| | 0 | 6 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
CyberHarem/eunhwa_nikke
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2024-01-24T08:31:00+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-24T08:40:34+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of eunhwa/ウンファ/银华/은화 (Nikke: Goddess of Victory) ======================================================== This is the dataset of eunhwa/ウンファ/银华/은화 (Nikke: Goddess of Victory), containing 43 images and their tags. The core tags of this character are 'black\_hair, bangs, long\_hair, purple\_eyes, breasts, hat, multicolored\_hair, beret', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
8d399a263450ed448881009bda381de51ed43644
# Dataset of tove/トーブ/托比/토브 (Nikke: Goddess of Victory) This is the dataset of tove/トーブ/托比/토브 (Nikke: Goddess of Victory), containing 31 images and their tags. The core tags of this character are `blonde_hair, long_hair, blue_eyes, braid, breasts, bangs, large_breasts`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:----------|:------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 31 | 45.33 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tove_nikke/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 31 | 22.61 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tove_nikke/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 82 | 50.99 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tove_nikke/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 31 | 38.85 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tove_nikke/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 82 | 76.27 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tove_nikke/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. 
| ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/tove_nikke', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 7 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, looking_at_viewer, solo, ass, blush, hood, looking_back, smile, skin_tight, black_jacket, from_behind, open_mouth, orange_bodysuit, white_background | | 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, blush, looking_at_viewer, smile, solo, covered_navel, 
simple_background, white_background, multicolored_bodysuit, one_eye_closed, orange_bodysuit, skin_tight, black_jacket, full_body, high_heels, long_sleeves, medium_breasts, open_jacket | | 2 | 8 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, blush, 1boy, hetero, mosaic_censoring, ass, penis, completely_nude, open_mouth, pussy, solo_focus, sweat, vaginal, hair_between_eyes, looking_at_viewer, anus, closed_eyes, cum, nipples, sex_from_behind, straddling | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | looking_at_viewer | solo | ass | blush | hood | looking_back | smile | skin_tight | black_jacket | from_behind | open_mouth | orange_bodysuit | white_background | covered_navel | simple_background | multicolored_bodysuit | one_eye_closed | full_body | high_heels | long_sleeves | medium_breasts | open_jacket | 1boy | hetero | mosaic_censoring | penis | completely_nude | pussy | solo_focus | sweat | vaginal | hair_between_eyes | anus | closed_eyes | cum | nipples | sex_from_behind | straddling | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------------|:-------|:------|:--------|:-------|:---------------|:--------|:-------------|:---------------|:--------------|:-------------|:------------------|:-------------------|:----------------|:--------------------|:------------------------|:-----------------|:------------|:-------------|:---------------|:-----------------|:--------------|:-------|:---------|:-------------------|:--------|:------------------|:--------|:-------------|:--------|:----------|:--------------------|:-------|:--------------|:------|:----------|:------------------|:-------------| | 0 | 7 | ![](samples/0/clu0-sample0.png) | 
![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | | X | | | X | X | X | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | 2 | 8 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | | X | X | | | | | | | X | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
CyberHarem/tove_nikke
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2024-01-24T08:31:20+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-24T08:38:24+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of tove/トーブ/托比/토브 (Nikke: Goddess of Victory) ===================================================== This is the dataset of tove/トーブ/托比/토브 (Nikke: Goddess of Victory), containing 31 images and their tags. The core tags of this character are 'blonde\_hair, long\_hair, blue\_eyes, braid, breasts, bangs, large\_breasts', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
1c8fcc5ee97559676f86fe0f52a7784fc34a61d8
# yujiepan/awq-model-zoo Here is some pre-computed awq information (scales & clips) used in [llm-awq](https://github.com/mit-han-lab/llm-awq). ## Scripts - Install the forked `llm-awq` at [https://github.com/yujiepan-work/llm-awq/tree/a41a08e79d8eb3d6335485b3625410af22a74426](https://github.com/yujiepan-work/llm-awq/tree/a41a08e79d8eb3d6335485b3625410af22a74426). Note: works with transformers==4.35.2 - Generating awq-info.pt: ```bash python do_awq.py --model_id mistralai/Mistral-7B-v0.1 --w_bit 8 --q_group_size 128 --dump_awq ./awq-info.pt ``` - Load a quantized model: You can use the official repo to get a fake/real quantized model. Alternatively, you can load a fake-quantized model: ```python from do_awq import FakeAWQModel FakeAWQModel.from_pretrained('mistralai/Mistral-7B-v0.1', awq_meta_path='./awq-info.pt', output_folder='./tmp/') ``` Note: the code is not in good shape. ## Related links - <https://huggingface.co/datasets/mit-han-lab/awq-model-zoo>
yujiepan/awq-model-zoo
[ "awq", "llm", "quantization", "region:us" ]
2024-01-24T08:45:05+00:00
{"tags": ["awq", "llm", "quantization"]}
2024-01-25T15:38:14+00:00
[]
[]
TAGS #awq #llm #quantization #region-us
# yujiepan/awq-model-zoo Here is some pre-computed awq information (scales & clips) used in llm-awq. ## Scripts - Install the forked 'llm-awq' at URL Note: works with transformers==4.35.2 - Generating URL: - Load a quantized model: You can use the official repo to get a fake/real quantized model. Alternatively, you can load a fake-quantized model: Note: the code is not in good shape. ## Related links - <URL
[ "# yujiepan/awq-model-zoo\n\nHere are some pre-computed awq information (scales & clips) used in llm-awq.", "## Scripts\n\n- Install the forked 'llm-awq' at URL Note: works with transformers==4.35.2\n\n- Generating URL:\n\n\n\n- Load a quantized model: You can use the offical repo to get a fake/real quantized model. Alternatively, you can load a fake-quantized model:\n\n\n\nNote: the code is not in good shape.", "## Related links\n\n- <URL" ]
[ "TAGS\n#awq #llm #quantization #region-us \n", "# yujiepan/awq-model-zoo\n\nHere are some pre-computed awq information (scales & clips) used in llm-awq.", "## Scripts\n\n- Install the forked 'llm-awq' at URL Note: works with transformers==4.35.2\n\n- Generating URL:\n\n\n\n- Load a quantized model: You can use the offical repo to get a fake/real quantized model. Alternatively, you can load a fake-quantized model:\n\n\n\nNote: the code is not in good shape.", "## Related links\n\n- <URL" ]
0966f82d0490e276a3d90bfd8574d387235ab003
<s> <INST> <<SYS>> This is alta view <</SYS>> What is Policy? </INST> Policy is a set of rules to protect an asset </s>
firojm57/first-dataset
[ "license:mit", "region:us" ]
2024-01-24T08:45:30+00:00
{"license": "mit"}
2024-01-24T11:09:31+00:00
[]
[]
TAGS #license-mit #region-us
<s> <INST> <<SYS>> This is alta view <</SYS>> What is Policy? </INST> Policy is a set of rules to protect an asset </s>
[]
[ "TAGS\n#license-mit #region-us \n" ]
98955dc65ef8bc1f290da05802ee541f700c9baa
# Dataset of vesti/ベスティー/贝斯蒂/베스티 (Nikke: Goddess of Victory) This is the dataset of vesti/ベスティー/贝斯蒂/베스티 (Nikke: Goddess of Victory), containing 17 images and their tags. The core tags of this character are `bangs, blue_eyes, short_hair, grey_hair, hat, beret, black_headwear, breasts`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:----------|:-------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 17 | 21.26 MiB | [Download](https://huggingface.co/datasets/CyberHarem/vesti_nikke/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 17 | 11.51 MiB | [Download](https://huggingface.co/datasets/CyberHarem/vesti_nikke/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 35 | 24.52 MiB | [Download](https://huggingface.co/datasets/CyberHarem/vesti_nikke/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 17 | 18.26 MiB | [Download](https://huggingface.co/datasets/CyberHarem/vesti_nikke/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 35 | 33.60 MiB | [Download](https://huggingface.co/datasets/CyberHarem/vesti_nikke/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. 
| ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/vesti_nikke', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------| | 0 | 17 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, looking_at_viewer, solo, open_mouth, blush, red_necktie, black_gloves, thighhighs, fingerless_gloves, holding, jacket | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | looking_at_viewer | solo | open_mouth | blush | red_necktie | black_gloves | thighhighs | fingerless_gloves | holding | jacket | 
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------------|:-------|:-------------|:--------|:--------------|:---------------|:-------------|:--------------------|:----------|:---------| | 0 | 17 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X |
CyberHarem/vesti_nikke
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2024-01-24T08:48:12+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-24T08:53:15+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of vesti/ベスティー/贝斯蒂/베스티 (Nikke: Goddess of Victory) ========================================================== This is the dataset of vesti/ベスティー/贝斯蒂/베스티 (Nikke: Goddess of Victory), containing 17 images and their tags. The core tags of this character are 'bangs, blue\_eyes, short\_hair, grey\_hair, hat, beret, black\_headwear, breasts', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
5f2edcad74fd112186ba68a0aa546bfe817a2945
# Dataset of mica/ミカ/米卡/미카 (Nikke: Goddess of Victory) This is the dataset of mica/ミカ/米卡/미카 (Nikke: Goddess of Victory), containing 24 images and their tags. The core tags of this character are `blue_eyes, brown_hair, long_hair, hairband, bangs`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:----------|:------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 24 | 35.64 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mica_nikke/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 24 | 20.88 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mica_nikke/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 58 | 42.63 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mica_nikke/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 24 | 31.66 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mica_nikke/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 58 | 59.52 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mica_nikke/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. 
| ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/mica_nikke', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------| | 0 | 24 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, looking_at_viewer, solo, black_gloves, fingerless_gloves, smile, open_mouth, blush, apron, holding, short_sleeves | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | looking_at_viewer | solo | black_gloves | fingerless_gloves | smile | open_mouth | blush | apron | holding | short_sleeves | 
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------------|:-------|:---------------|:--------------------|:--------|:-------------|:--------|:--------|:----------|:----------------| | 0 | 24 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X |
CyberHarem/mica_nikke
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2024-01-24T08:48:34+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-24T08:54:04+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of mica/ミカ/米卡/미카 (Nikke: Goddess of Victory) ==================================================== This is the dataset of mica/ミカ/米卡/미카 (Nikke: Goddess of Victory), containing 24 images and their tags. The core tags of this character are 'blue\_eyes, brown\_hair, long\_hair, hairband, bangs', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
fb3d0103b9fd41afcf0630e2b1c844eb8c2200af
# Dataset of rei/ライ/莱伊/라이 (Nikke: Goddess of Victory) This is the dataset of rei/ライ/莱伊/라이 (Nikke: Goddess of Victory), containing 14 images and their tags. The core tags of this character are `hat, long_hair, purple_hair, ahoge, bangs, blue_headwear, beret, bow, one_side_up, purple_headwear, red_eyes`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:----------|:-----------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 14 | 18.22 MiB | [Download](https://huggingface.co/datasets/CyberHarem/rei_nikke/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 14 | 11.10 MiB | [Download](https://huggingface.co/datasets/CyberHarem/rei_nikke/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 31 | 22.90 MiB | [Download](https://huggingface.co/datasets/CyberHarem/rei_nikke/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 14 | 16.36 MiB | [Download](https://huggingface.co/datasets/CyberHarem/rei_nikke/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 31 | 34.24 MiB | [Download](https://huggingface.co/datasets/CyberHarem/rei_nikke/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. 
| ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/rei_nikke', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 14 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | blush, long_sleeves, open_mouth, 1girl, bag, jacket, solo, looking_at_viewer, bird, plaid_skirt, pleated_skirt, standing, white_shirt, black_footwear, collared_shirt, shoes, stuffed_animal, white_socks, :d, multiple_girls, school_uniform | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | blush | long_sleeves | open_mouth | 1girl | bag | jacket | solo | looking_at_viewer | bird | 
plaid_skirt | pleated_skirt | standing | white_shirt | black_footwear | collared_shirt | shoes | stuffed_animal | white_socks | :d | multiple_girls | school_uniform | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------------|:-------------|:--------|:------|:---------|:-------|:--------------------|:-------|:--------------|:----------------|:-----------|:--------------|:-----------------|:-----------------|:--------|:-----------------|:--------------|:-----|:-----------------|:-----------------| | 0 | 14 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
CyberHarem/rei_nikke
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2024-01-24T09:00:36+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-24T09:03:56+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of rei/ライ/莱伊/라이 (Nikke: Goddess of Victory) =================================================== This is the dataset of rei/ライ/莱伊/라이 (Nikke: Goddess of Victory), containing 14 images and their tags. The core tags of this character are 'hat, long\_hair, purple\_hair, ahoge, bangs, blue\_headwear, beret, bow, one\_side\_up, purple\_headwear, red\_eyes', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
9a15bce1088a986b2290905dca45be04ac0dc3d7
# Dataset of novel/ノベル/诺薇儿/노벨 (Nikke: Goddess of Victory) This is the dataset of novel/ノベル/诺薇儿/노벨 (Nikke: Goddess of Victory), containing 27 images and their tags. The core tags of this character are `bangs, hat, green_eyes, hair_bun, brown_hair, breasts, blonde_hair`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:----------|:-------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 27 | 43.04 MiB | [Download](https://huggingface.co/datasets/CyberHarem/novel_nikke/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 27 | 21.22 MiB | [Download](https://huggingface.co/datasets/CyberHarem/novel_nikke/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 64 | 46.09 MiB | [Download](https://huggingface.co/datasets/CyberHarem/novel_nikke/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 27 | 35.97 MiB | [Download](https://huggingface.co/datasets/CyberHarem/novel_nikke/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 64 | 72.51 MiB | [Download](https://huggingface.co/datasets/CyberHarem/novel_nikke/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. 
| ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/novel_nikke', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 27 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, solo, looking_at_viewer, blush, open_mouth, thighhighs, smile, holding, skirt, +_+, long_sleeves, simple_background, white_background | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | looking_at_viewer | blush | open_mouth | thighhighs | smile | holding | skirt | +_+ | long_sleeves | simple_background | white_background | 
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:--------------------|:--------|:-------------|:-------------|:--------|:----------|:--------|:------|:---------------|:--------------------|:-------------------| | 0 | 27 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X |
CyberHarem/novel_nikke
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2024-01-24T09:00:36+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-24T09:06:53+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of novel/ノベル/诺薇儿/노벨 (Nikke: Goddess of Victory) ======================================================= This is the dataset of novel/ノベル/诺薇儿/노벨 (Nikke: Goddess of Victory), containing 27 images and their tags. The core tags of this character are 'bangs, hat, green\_eyes, hair\_bun, brown\_hair, breasts, blonde\_hair', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
7cb1b7179f9288c0ae080decce38e2e39abe1c99
# Dataset of leona/レオナ/莱昂纳/레오나 (Nikke: Goddess of Victory) This is the dataset of leona/レオナ/莱昂纳/레오나 (Nikke: Goddess of Victory), containing 33 images and their tags. The core tags of this character are `animal_ears, breasts, long_hair, pink_hair, large_breasts, bangs, yellow_eyes, animal_ear_fluff`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:----------|:-------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 33 | 53.73 MiB | [Download](https://huggingface.co/datasets/CyberHarem/leona_nikke/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 33 | 25.27 MiB | [Download](https://huggingface.co/datasets/CyberHarem/leona_nikke/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 82 | 57.34 MiB | [Download](https://huggingface.co/datasets/CyberHarem/leona_nikke/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 33 | 45.33 MiB | [Download](https://huggingface.co/datasets/CyberHarem/leona_nikke/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 82 | 89.60 MiB | [Download](https://huggingface.co/datasets/CyberHarem/leona_nikke/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. 
| ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/leona_nikke', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------| | 0 | 33 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, blush, solo, looking_at_viewer, detached_sleeves, bare_shoulders, smile, open_mouth, sideboob, white_background | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blush | solo | looking_at_viewer | detached_sleeves | bare_shoulders | smile | open_mouth | sideboob | white_background | 
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:-------|:--------------------|:-------------------|:-----------------|:--------|:-------------|:-----------|:-------------------| | 0 | 33 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X |
CyberHarem/leona_nikke
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2024-01-24T09:00:43+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-24T09:07:20+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of leona/レオナ/莱昂纳/레오나 (Nikke: Goddess of Victory) ======================================================== This is the dataset of leona/レオナ/莱昂纳/레오나 (Nikke: Goddess of Victory), containing 33 images and their tags. The core tags of this character are 'animal\_ears, breasts, long\_hair, pink\_hair, large\_breasts, bangs, yellow\_eyes, animal\_ear\_fluff', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
bf18cd7aa8c2af11a59803614e854797a029c04e
# SQuAD ID A reorganized and corrected version of the SQuAD ID dataset. ## Source Paper Title: Sequence-to-Sequence Learning for Indonesian Automatic Question Generator Venue: ICAICTA 2020 ## Data Correction The original data contains many incorrect `answer_start` values: the subtext extracted from the context at that position does not match the answer text. To correct this, a script attempts to locate the answer text in the context using Python's string find() method. If found, `answer_start` is replaced, and the original value is copied to `answer_start_original` as a backup. If not found, `answer_start` is set to null. Additionally, if none of the answers in `answers` can be found, the field `is_all_answer_missing` is set to true as an indicator. ## References - https://github.com/IndoNLP/nusa-crowd/blob/master/nusacrowd/nusa_datasets/squad_id/squad_id.py - https://ieeexplore.ieee.org/document/9429032 - https://indonlp.github.io/nusa-catalogue/card.html?squad_id - https://github.com/FerdiantJoshua/question-generator ## Note I uploaded this repository simply because the original nusa-crowd version of SQuAD ID fails with an error when loaded.
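The correction step described above can be sketched as follows. This is a minimal illustration of the approach, not the actual correction script; the function names are hypothetical, and only the fields mentioned in the description are handled.

```python
def correct_answer(context: str, answer: dict) -> dict:
    """Re-locate one answer in its context using str.find().

    Keeps the original offset in 'answer_start_original' as a backup and
    sets 'answer_start' to None (null) when the text cannot be found.
    """
    corrected = dict(answer)
    corrected["answer_start_original"] = answer["answer_start"]  # backup
    pos = context.find(answer["text"])
    corrected["answer_start"] = pos if pos != -1 else None
    return corrected


def is_all_answer_missing(context: str, answers: list) -> bool:
    """True when none of the answer texts occur in the context."""
    return all(context.find(a["text"]) == -1 for a in answers)
```

For example, if the context is `"Jakarta adalah ibu kota Indonesia"` and the stored offset for the answer `"ibu kota"` is wrong, `correct_answer` replaces it with the position returned by `find()` while preserving the old value for reference.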
sagaekakristi/squad_id_muis_2020
[ "task_categories:question-answering", "language:id", "license:unknown", "question-answering", "extractive-question-answering", "region:us" ]
2024-01-24T09:08:01+00:00
{"language": ["id"], "license": "unknown", "task_categories": ["question-answering"], "pretty_name": "SQuAD ID", "tags": ["question-answering", "extractive-question-answering"], "dataset_info": {"features": [{"name": "origin_id", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "dtype": "string"}, {"name": "is_impossible", "dtype": "bool"}, {"name": "title", "dtype": "string"}, {"name": "failure_count", "dtype": "int64"}, {"name": "paragraph_postags", "dtype": "string"}, {"name": "paragraph_entities", "dtype": "string"}, {"name": "qa_postags", "dtype": "string"}, {"name": "qa_entities", "dtype": "string"}, {"name": "plausible_answers", "dtype": "string"}, {"name": "indonesian_answers", "dtype": "string"}, {"name": "indonesian_plausible_answers", "dtype": "string"}, {"name": "is_all_answer_missing", "dtype": "bool"}, {"name": "is_data_invalid", "dtype": "bool"}, {"name": "origin_split", "dtype": "string"}, {"name": "exclude_suggestion", "dtype": "bool"}]}}
2024-01-29T10:48:12+00:00
[]
[ "id" ]
TAGS #task_categories-question-answering #language-Indonesian #license-unknown #question-answering #extractive-question-answering #region-us
# SQuAD ID Reorganized and corrected SQuAD ID dataset. ## Source Paper Title: Sequence-to-Sequence Learning for Indonesian Automatic Question Generator Venue: ICAICTA 2020 ## Data Correction I found many incorrect 'answer_start' values in the original data, where the subtext extracted from the context at the given position does not match the answer text. To correct this, a script attempts to find the answer text in the context using the Python string find() method. If found, 'answer_start' is replaced, and the original value is copied to 'answer_start_original' as a backup. If not found, 'answer_start' is set to null. Additionally, if none of the answers in 'answers' can be found, the field 'is_all_answer_missing' is set to true as an indicator. ## References - URL - URL - URL - URL ## Note I uploaded this repository simply because the original nusa-crowd version of SQuAD ID cannot be used due to an error when loaded.
[ "# SQuAD ID\n\nReorganized and corrected SQuAD ID dataset.", "## Source\n\nPaper Title: Sequence-to-Sequence Learning for Indonesian Automatic Question Generator\n\nVenue: ICAICTA 2020", "## Data Correction\n\nI found many incorrect 'answer_start' from the original data, where the extracted subtext from context using the position does not match the answer text. To correct this, a script is used to attempt to find answer text in context using Python string find() method. If found, the 'answer_start' will be replaced, and the original answer_start is copied to 'answer_start_original' as backup. If not found, the 'answer_start' will be null. Additionally, if all answer in 'answers' cannot be found, the value of field 'is_all_answer_missing' will be true as an indicator.", "## References\n\n- URL\n- URL\n- URL\n- URL", "## Note\n\nI uploaded this repository simply because the original nusa-crowd version of SQuAD ID cannot be used due to error when loaded." ]
[ "TAGS\n#task_categories-question-answering #language-Indonesian #license-unknown #question-answering #extractive-question-answering #region-us \n", "# SQuAD ID\n\nReorganized and corrected SQuAD ID dataset.", "## Source\n\nPaper Title: Sequence-to-Sequence Learning for Indonesian Automatic Question Generator\n\nVenue: ICAICTA 2020", "## Data Correction\n\nI found many incorrect 'answer_start' from the original data, where the extracted subtext from context using the position does not match the answer text. To correct this, a script is used to attempt to find answer text in context using Python string find() method. If found, the 'answer_start' will be replaced, and the original answer_start is copied to 'answer_start_original' as backup. If not found, the 'answer_start' will be null. Additionally, if all answer in 'answers' cannot be found, the value of field 'is_all_answer_missing' will be true as an indicator.", "## References\n\n- URL\n- URL\n- URL\n- URL", "## Note\n\nI uploaded this repository simply because the original nusa-crowd version of SQuAD ID cannot be used due to error when loaded." ]
f31801361e954bd5d7f66fb0dd54f87dc6a05d73
# Dataset of epinel/エピネル/艾菲涅尔/에피넬 (Nikke: Goddess of Victory) This is the dataset of epinel/エピネル/艾菲涅尔/에피넬 (Nikke: Goddess of Victory), containing 20 images and their tags. The core tags of this character are `bangs, long_hair, multicolored_hair, very_long_hair, ahoge, green_eyes, ponytail, streaked_hair, facial_mark, grey_hair, two-tone_hair, goggles_on_head, green_hair, bandaid_on_face, blue_hair, hair_between_eyes, hair_ornament, white_hair`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:----------|:--------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 20 | 45.56 MiB | [Download](https://huggingface.co/datasets/CyberHarem/epinel_nikke/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 20 | 22.81 MiB | [Download](https://huggingface.co/datasets/CyberHarem/epinel_nikke/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 51 | 49.32 MiB | [Download](https://huggingface.co/datasets/CyberHarem/epinel_nikke/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 20 | 39.18 MiB | [Download](https://huggingface.co/datasets/CyberHarem/epinel_nikke/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. 
| | stage3-p480-1200 | 51 | 74.47 MiB | [Download](https://huggingface.co/datasets/CyberHarem/epinel_nikke/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/epinel_nikke', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. 
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 20 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | open_jacket, hood_down, 1girl, solo, bandaid, white_shirt, blue_jacket, hooded_jacket, pleated_skirt, black_skirt, looking_at_viewer, white_socks, bubble_blowing, chewing_gum, drawstring, kneehighs, puffy_long_sleeves, shoes, goggles, holding, standing, white_hoodie | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | open_jacket | hood_down | 1girl | solo | bandaid | white_shirt | blue_jacket | hooded_jacket | pleated_skirt | black_skirt | looking_at_viewer | white_socks | bubble_blowing | chewing_gum | drawstring | kneehighs | puffy_long_sleeves | shoes | goggles | holding | standing | white_hoodie | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------|:------------|:--------|:-------|:----------|:--------------|:--------------|:----------------|:----------------|:--------------|:--------------------|:--------------|:-----------------|:--------------|:-------------|:------------|:---------------------|:--------|:----------|:----------|:-----------|:---------------| | 0 | 20 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | 
![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
CyberHarem/epinel_nikke
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2024-01-24T09:26:51+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-24T09:32:17+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of epinel/エピネル/艾菲涅尔/에피넬 (Nikke: Goddess of Victory) =========================================================== This is the dataset of epinel/エピネル/艾菲涅尔/에피넬 (Nikke: Goddess of Victory), containing 20 images and their tags. The core tags of this character are 'bangs, long\_hair, multicolored\_hair, very\_long\_hair, ahoge, green\_eyes, ponytail, streaked\_hair, facial\_mark, grey\_hair, two-tone\_hair, goggles\_on\_head, green\_hair, bandaid\_on\_face, blue\_hair, hair\_between\_eyes, hair\_ornament, white\_hair', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
dbb0c51a51dd2891c13ffc56e2dd0717d2aa3034
# Dataset of isabel/イサベル/伊莎贝尔/이사벨 (Nikke: Goddess of Victory) This is the dataset of isabel/イサベル/伊莎贝尔/이사벨 (Nikke: Goddess of Victory), containing 18 images and their tags. The core tags of this character are `breasts, long_hair, bangs, large_breasts, purple_eyes, purple_hair, red_eyes, black_hair, headgear, mechanical_wings, wings`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:----------|:--------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 18 | 31.54 MiB | [Download](https://huggingface.co/datasets/CyberHarem/isabel_nikke/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 18 | 13.71 MiB | [Download](https://huggingface.co/datasets/CyberHarem/isabel_nikke/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 42 | 30.29 MiB | [Download](https://huggingface.co/datasets/CyberHarem/isabel_nikke/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 18 | 25.06 MiB | [Download](https://huggingface.co/datasets/CyberHarem/isabel_nikke/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. 
| | stage3-p480-1200 | 42 | 50.33 MiB | [Download](https://huggingface.co/datasets/CyberHarem/isabel_nikke/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/isabel_nikke', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. 
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 6 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, bare_shoulders, looking_at_viewer, smile, solo, white_background, blush, simple_background, ass, blue_hair, from_behind, leotard, looking_back, parted_lips, upper_body | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | bare_shoulders | looking_at_viewer | smile | solo | white_background | blush | simple_background | ass | blue_hair | from_behind | leotard | looking_back | parted_lips | upper_body | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------------|:--------------------|:--------|:-------|:-------------------|:--------|:--------------------|:------|:------------|:--------------|:----------|:---------------|:--------------|:-------------| | 0 | 6 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
CyberHarem/isabel_nikke
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2024-01-24T09:27:03+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-24T09:33:35+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of isabel/イサベル/伊莎贝尔/이사벨 (Nikke: Goddess of Victory) =========================================================== This is the dataset of isabel/イサベル/伊莎贝尔/이사벨 (Nikke: Goddess of Victory), containing 18 images and their tags. The core tags of this character are 'breasts, long\_hair, bangs, large\_breasts, purple\_eyes, purple\_hair, red\_eyes, black\_hair, headgear, mechanical\_wings, wings', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
208b543b92e5e6e51ba99e6e5cc2f97a4ec019c2
assembly data json
Bong9/assemblydata
[ "region:us" ]
2024-01-24T09:31:45+00:00
{}
2024-01-27T16:16:38+00:00
[]
[]
TAGS #region-us
assembly data json
[]
[ "TAGS\n#region-us \n" ]
2dd15cbda21d4e81711bc7dec352479ec8fb0b3c
# Dataset of biscuit/ビスケット/饼干/비스킷 (Nikke: Goddess of Victory) This is the dataset of biscuit/ビスケット/饼干/비스킷 (Nikke: Goddess of Victory), containing 19 images and their tags. The core tags of this character are `hair_ornament, yellow_eyes, animal_ears, tail, ahoge, bow, breasts, brown_hair, medium_hair, bangs, dog_ears, hair_bow, large_breasts`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:----------|:---------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 19 | 38.99 MiB | [Download](https://huggingface.co/datasets/CyberHarem/biscuit_nikke/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 19 | 18.65 MiB | [Download](https://huggingface.co/datasets/CyberHarem/biscuit_nikke/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 48 | 43.54 MiB | [Download](https://huggingface.co/datasets/CyberHarem/biscuit_nikke/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 19 | 32.51 MiB | [Download](https://huggingface.co/datasets/CyberHarem/biscuit_nikke/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. 
| | stage3-p480-1200 | 48 | 66.40 MiB | [Download](https://huggingface.co/datasets/CyberHarem/biscuit_nikke/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/biscuit_nikke', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. 
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------| | 0 | 19 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, solo, blush, open_mouth, long_sleeves, looking_at_viewer, smile, dog, dress, virtual_youtuber, shirt | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | blush | open_mouth | long_sleeves | looking_at_viewer | smile | dog | dress | virtual_youtuber | shirt | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:--------|:-------------|:---------------|:--------------------|:--------|:------|:--------|:-------------------|:--------| | 0 | 19 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X |
CyberHarem/biscuit_nikke
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2024-01-24T09:33:36+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-24T09:38:27+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of biscuit/ビスケット/饼干/비스킷 (Nikke: Goddess of Victory) =========================================================== This is the dataset of biscuit/ビスケット/饼干/비스킷 (Nikke: Goddess of Victory), containing 19 images and their tags. The core tags of this character are 'hair\_ornament, yellow\_eyes, animal\_ears, tail, ahoge, bow, breasts, brown\_hair, medium\_hair, bangs, dog\_ears, hair\_bow, large\_breasts', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
16a933ba9431c40e896d1fff1ce77441ce582067
# Dataset of harran/ハラン/哈兰/하란 (Nikke: Goddess of Victory) This is the dataset of harran/ハラン/哈兰/하란 (Nikke: Goddess of Victory), containing 14 images and their tags. The core tags of this character are `breasts, long_hair, bangs, black_hair, large_breasts, horns, very_long_hair, blue_eyes, hair_between_eyes, headgear, purple_eyes`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:----------|:--------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 14 | 31.84 MiB | [Download](https://huggingface.co/datasets/CyberHarem/harran_nikke/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 14 | 14.20 MiB | [Download](https://huggingface.co/datasets/CyberHarem/harran_nikke/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 36 | 29.61 MiB | [Download](https://huggingface.co/datasets/CyberHarem/harran_nikke/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 14 | 26.29 MiB | [Download](https://huggingface.co/datasets/CyberHarem/harran_nikke/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. 
| | stage3-p480-1200 | 36 | 47.26 MiB | [Download](https://huggingface.co/datasets/CyberHarem/harran_nikke/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/harran_nikke', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. 
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 14 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, solo, looking_at_viewer, smile, bare_shoulders, thighs, standing, closed_mouth, leotard, thighhighs, elbow_gloves, holding_weapon, parted_lips | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | looking_at_viewer | smile | bare_shoulders | thighs | standing | closed_mouth | leotard | thighhighs | elbow_gloves | holding_weapon | parted_lips | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:--------------------|:--------|:-----------------|:---------|:-----------|:---------------|:----------|:-------------|:---------------|:-----------------|:--------------| | 0 | 14 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X |
CyberHarem/harran_nikke
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2024-01-24T09:33:38+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-24T09:37:52+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of harran/ハラン/哈兰/하란 (Nikke: Goddess of Victory) ======================================================= This is the dataset of harran/ハラン/哈兰/하란 (Nikke: Goddess of Victory), containing 14 images and their tags. The core tags of this character are 'breasts, long\_hair, bangs, black\_hair, large\_breasts, horns, very\_long\_hair, blue\_eyes, hair\_between\_eyes, headgear, purple\_eyes', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
2e88f246c047332059a993c02c62ef1275141986
# Dataset of sugar/シュガー/舒格/슈가 (Nikke: Goddess of Victory) This is the dataset of sugar/シュガー/舒格/슈가 (Nikke: Goddess of Victory), containing 16 images and their tags. The core tags of this character are `breasts, white_hair, short_hair, bangs, red_eyes, ahoge, earrings, large_breasts`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:----------|:-------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 16 | 24.58 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sugar_nikke/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 16 | 13.77 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sugar_nikke/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 33 | 24.30 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sugar_nikke/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 16 | 21.72 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sugar_nikke/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 33 | 33.79 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sugar_nikke/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. 
| ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/sugar_nikke', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 16 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, solo, looking_at_viewer, black_jacket, jewelry, belt, black_pants, closed_mouth, open_jacket, white_shirt, black_dress, long_sleeves | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | looking_at_viewer | black_jacket | jewelry | belt | black_pants | closed_mouth | open_jacket | white_shirt | black_dress | long_sleeves | 
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:--------------------|:---------------|:----------|:-------|:--------------|:---------------|:--------------|:--------------|:--------------|:---------------| | 0 | 16 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X |
CyberHarem/sugar_nikke
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2024-01-24T09:33:39+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-24T09:38:05+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of sugar/シュガー/舒格/슈가 (Nikke: Goddess of Victory) ======================================================= This is the dataset of sugar/シュガー/舒格/슈가 (Nikke: Goddess of Victory), containing 16 images and their tags. The core tags of this character are 'breasts, white\_hair, short\_hair, bangs, red\_eyes, ahoge, earrings, large\_breasts', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
dcaafc4bf5b44c96a1ced56c0a7d42da5a3a2c45
# GermanRAG 🇩🇪📜🦜 This dataset is derived from the [GermanDPR dataset](https://huggingface.co/datasets/deepset/germandpr) and enhances it by providing fully formulated answers instead of answer spans. It can be used to finetune for retrieval augmented generation tasks (RAG) in German. We deduplicated the original contexts resulting in 2243 unique contexts and repeated the hard negatives of half of them, such that the last third of the total dataset contains only not answerable examples. In contrast to the original dataset the number of contexts per QA pair varies to mimic retrieval results in real world scenarios, resulting in a distribution of positive and hard negative contexts as follows: | # positive contexts | # hard negative contexts | # examples |---|---|--- | 1 | 0 | 562 | 1 | 1 | 562 | 1 | 2 | 561 | 1 | 3 | 558 | 0 | 1 | 375 | 0 | 2 | 373 | 0 | 3 | 371 The passages in the `contexts` list are shuffled and `positive_ctx_idx` marks the index of the positive context. `-1` indicates examples without positive context, which are paired with `"Mit den gegebenen Informationen ist diese Frage nicht zu beantworten."` as answer. Code used to create this dataset can be found [here](https://github.com/rasdani/germanrag). ## Known issues In rare cases hard negatives still provide sufficient information to answer the question. For the last third, we therefore paired hard negatives with random questions, sampled without replacement. ## Acknowledgements Full credit for the original dataset goes to the [authors](https://arxiv.org/abs/2104.12741) of [GermanDPR](https://www.deepset.ai/germanquad). The original dataset is licensed under [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/) and this derived work therefore inherits the same license. 
Citation for the original dataset: ``` @misc{möller2021germanquad, title={GermanQuAD and GermanDPR: Improving Non-English Question Answering and Passage Retrieval}, author={Timo Möller and Julian Risch and Malte Pietsch}, year={2021}, eprint={2104.12741}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` The derived dataset was created for [DiscoResearch](https://huggingface.co/DiscoResearch) by [Daniel Auras](https://huggingface.co/rasdani) with support from [JP Harries](https://huggingface.co/jphme) and [Björn Pluster](https://huggingface.co/bjoernp).
DiscoResearch/germanrag
[ "task_categories:question-answering", "task_categories:text-retrieval", "task_categories:conversational", "task_ids:open-domain-qa", "task_ids:document-retrieval", "task_ids:document-question-answering", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:deepset/germandpr", "language:de", "license:cc-by-4.0", "RAG", "retrieval-augmented-generation", "arxiv:2104.12741", "region:us" ]
2024-01-24T09:40:47+00:00
{"language": ["de"], "license": "cc-by-4.0", "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["deepset/germandpr"], "task_categories": ["question-answering", "text-retrieval", "conversational"], "task_ids": ["open-domain-qa", "document-retrieval", "document-question-answering"], "pretty_name": "GermanRAG", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "germanrag.jsonl"}]}], "tags": ["RAG", "retrieval-augmented-generation"]}
2024-02-04T17:50:10+00:00
[ "2104.12741" ]
[ "de" ]
TAGS #task_categories-question-answering #task_categories-text-retrieval #task_categories-conversational #task_ids-open-domain-qa #task_ids-document-retrieval #task_ids-document-question-answering #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-deepset/germandpr #language-German #license-cc-by-4.0 #RAG #retrieval-augmented-generation #arxiv-2104.12741 #region-us
GermanRAG 🇩🇪 ============ This dataset is derived from the GermanDPR dataset and enhances it by providing fully formulated answers instead of answer spans. It can be used to finetune for retrieval augmented generation tasks (RAG) in German. We deduplicated the original contexts resulting in 2243 unique contexts and repeated the hard negatives of half of them, such that the last third of the total dataset contains only not answerable examples. In contrast to the original dataset the number of contexts per QA pair varies to mimic retrieval results in real world scenarios, resulting in a distribution of positive and hard negative contexts as follows: # positive contexts: 1, # hard negative contexts: 0, # examples: 562 # positive contexts: 1, # hard negative contexts: 1, # examples: 562 # positive contexts: 1, # hard negative contexts: 2, # examples: 561 # positive contexts: 1, # hard negative contexts: 3, # examples: 558 # positive contexts: 0, # hard negative contexts: 1, # examples: 375 # positive contexts: 0, # hard negative contexts: 2, # examples: 373 # positive contexts: 0, # hard negative contexts: 3, # examples: 371 The passages in the 'contexts' list are shuffled and 'positive\_ctx\_idx' marks the index of the positive context. '-1' indicates examples without positive context, which are paired with '"Mit den gegebenen Informationen ist diese Frage nicht zu beantworten."' as answer. Code used to create this dataset can be found here. Known issues ------------ In rare cases hard negatives still provide sufficient information to answer the question. For the last third, we therefore paired hard negatives with random questions, sampled without replacement. Acknowledgements ---------------- Full credit for the original dataset goes to the authors of GermanDPR. The original dataset is licensed under CC BY-SA 4.0 and this derived work therefore inherits the same license. 
Citation for the original dataset: The derived dataset was created for DiscoResearch by Daniel Auras with support from JP Harries and Björn Pluster.
[ "# positive contexts: 1, # hard negative contexts: 0, # examples: 562", "# positive contexts: 1, # hard negative contexts: 1, # examples: 562", "# positive contexts: 1, # hard negative contexts: 2, # examples: 561", "# positive contexts: 1, # hard negative contexts: 3, # examples: 558", "# positive contexts: 0, # hard negative contexts: 1, # examples: 375", "# positive contexts: 0, # hard negative contexts: 2, # examples: 373", "# positive contexts: 0, # hard negative contexts: 3, # examples: 371\n\n\nThe passages in the 'contexts' list are shuffled and 'positive\\_ctx\\_idx' marks the index of the positive context.\n'-1' indicates examples without positive context, which are paired with '\"Mit den gegebenen Informationen ist diese Frage nicht zu beantworten.\"' as answer.\n\n\nCode used to create this dataset can be found here.\n\n\nKnown issues\n------------\n\n\nIn rare cases hard negatives still provide sufficient information to answer the question. For the last third, we therefore paired hard negatives with random questions, sampled without replacement.\n\n\nAcknowledgements\n----------------\n\n\nFull credit for the original dataset goes to the authors of GermanDPR.\n\n\nThe original dataset is licensed under CC BY-SA 4.0 and this derived work therefore inherits the same license.\n\n\nCitation for the original dataset:\n\n\nThe derived dataset was created for DiscoResearch by Daniel Auras with support from JP Harries and Björn Pluster." ]
[ "TAGS\n#task_categories-question-answering #task_categories-text-retrieval #task_categories-conversational #task_ids-open-domain-qa #task_ids-document-retrieval #task_ids-document-question-answering #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-deepset/germandpr #language-German #license-cc-by-4.0 #RAG #retrieval-augmented-generation #arxiv-2104.12741 #region-us \n", "# positive contexts: 1, # hard negative contexts: 0, # examples: 562", "# positive contexts: 1, # hard negative contexts: 1, # examples: 562", "# positive contexts: 1, # hard negative contexts: 2, # examples: 561", "# positive contexts: 1, # hard negative contexts: 3, # examples: 558", "# positive contexts: 0, # hard negative contexts: 1, # examples: 375", "# positive contexts: 0, # hard negative contexts: 2, # examples: 373", "# positive contexts: 0, # hard negative contexts: 3, # examples: 371\n\n\nThe passages in the 'contexts' list are shuffled and 'positive\\_ctx\\_idx' marks the index of the positive context.\n'-1' indicates examples without positive context, which are paired with '\"Mit den gegebenen Informationen ist diese Frage nicht zu beantworten.\"' as answer.\n\n\nCode used to create this dataset can be found here.\n\n\nKnown issues\n------------\n\n\nIn rare cases hard negatives still provide sufficient information to answer the question. For the last third, we therefore paired hard negatives with random questions, sampled without replacement.\n\n\nAcknowledgements\n----------------\n\n\nFull credit for the original dataset goes to the authors of GermanDPR.\n\n\nThe original dataset is licensed under CC BY-SA 4.0 and this derived work therefore inherits the same license.\n\n\nCitation for the original dataset:\n\n\nThe derived dataset was created for DiscoResearch by Daniel Auras with support from JP Harries and Björn Pluster." ]
d47f84d9725210bbcb7a54cfd458c517eae017e4
# Dataset of anchor/アンカー/安克/앵커 (Nikke: Goddess of Victory) This is the dataset of anchor/アンカー/安克/앵커 (Nikke: Goddess of Victory), containing 19 images and their tags. The core tags of this character are `blue_eyes, long_hair, hairband, bangs, very_long_hair, ribbon, blue_hair, hair_ribbon, grey_hair, blue_hairband, bow_hairband`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:----------|:--------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 19 | 18.74 MiB | [Download](https://huggingface.co/datasets/CyberHarem/anchor_nikke/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 19 | 12.87 MiB | [Download](https://huggingface.co/datasets/CyberHarem/anchor_nikke/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 39 | 26.55 MiB | [Download](https://huggingface.co/datasets/CyberHarem/anchor_nikke/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 19 | 17.47 MiB | [Download](https://huggingface.co/datasets/CyberHarem/anchor_nikke/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. 
| | stage3-p480-1200 | 39 | 34.73 MiB | [Download](https://huggingface.co/datasets/CyberHarem/anchor_nikke/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/anchor_nikke', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. 
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 19 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | white_shirt, 1girl, blue_skirt, see-through, short_sleeves, looking_at_viewer, pleated_skirt, blue_sailor_collar, serafuku, solo, choker, neckerchief, holding, wet_shirt, white_thighhighs, blush, multiple_girls, one-piece_swimsuit, open_mouth | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | white_shirt | 1girl | blue_skirt | see-through | short_sleeves | looking_at_viewer | pleated_skirt | blue_sailor_collar | serafuku | solo | choker | neckerchief | holding | wet_shirt | white_thighhighs | blush | multiple_girls | one-piece_swimsuit | open_mouth | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------|:--------|:-------------|:--------------|:----------------|:--------------------|:----------------|:---------------------|:-----------|:-------|:---------|:--------------|:----------|:------------|:-------------------|:--------|:-----------------|:---------------------|:-------------| | 0 | 19 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X 
| X | X | X | X | X | X | X |
CyberHarem/anchor_nikke
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2024-01-24T09:45:00+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-24T09:49:31+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of anchor/アンカー/安克/앵커 (Nikke: Goddess of Victory) ======================================================== This is the dataset of anchor/アンカー/安克/앵커 (Nikke: Goddess of Victory), containing 19 images and their tags. The core tags of this character are 'blue\_eyes, long\_hair, hairband, bangs, very\_long\_hair, ribbon, blue\_hair, hair\_ribbon, grey\_hair, blue\_hairband, bow\_hairband', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
5d1e3778336103c8cb9f47456e2e7c7653e40c9d
# Dataset of cocoa/ココア/可可/코코아 (Nikke: Goddess of Victory) This is the dataset of cocoa/ココア/可可/코코아 (Nikke: Goddess of Victory), containing 18 images and their tags. The core tags of this character are `bow, maid_headdress, twintails, long_hair, black_bow, hair_bow, pink_hair, bangs, brown_eyes, breasts, ahoge, hair_between_eyes`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:----------|:-------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 18 | 20.24 MiB | [Download](https://huggingface.co/datasets/CyberHarem/cocoa_nikke/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 18 | 11.62 MiB | [Download](https://huggingface.co/datasets/CyberHarem/cocoa_nikke/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 37 | 24.95 MiB | [Download](https://huggingface.co/datasets/CyberHarem/cocoa_nikke/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 18 | 17.59 MiB | [Download](https://huggingface.co/datasets/CyberHarem/cocoa_nikke/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. 
| | stage3-p480-1200 | 37 | 34.52 MiB | [Download](https://huggingface.co/datasets/CyberHarem/cocoa_nikke/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/cocoa_nikke', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. 
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------| | 0 | 18 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | maid, 1girl, solo, black_dress, blush, looking_at_viewer, apron, frills, holding, juliet_sleeves, food | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | maid | 1girl | solo | black_dress | blush | looking_at_viewer | apron | frills | holding | juliet_sleeves | food | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------|:--------|:-------|:--------------|:--------|:--------------------|:--------|:---------|:----------|:-----------------|:-------| | 0 | 18 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X |
CyberHarem/cocoa_nikke
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2024-01-24T09:45:01+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-24T09:50:08+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of cocoa/ココア/可可/코코아 (Nikke: Goddess of Victory) ======================================================= This is the dataset of cocoa/ココア/可可/코코아 (Nikke: Goddess of Victory), containing 18 images and their tags. The core tags of this character are 'bow, maid\_headdress, twintails, long\_hair, black\_bow, hair\_bow, pink\_hair, bangs, brown\_eyes, breasts, ahoge, hair\_between\_eyes', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
b7b0e98cf850915d4a555ea8e07486288fc8feea
# Dataset of milk/ミルク/米尔克/밀크 (Nikke: Goddess of Victory) This is the dataset of milk/ミルク/米尔克/밀크 (Nikke: Goddess of Victory), containing 16 images and their tags. The core tags of this character are `black_hair, yellow_eyes, short_hair, breasts, bangs, hat, black_headwear, large_breasts`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:----------|:------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 16 | 20.53 MiB | [Download](https://huggingface.co/datasets/CyberHarem/milk_nikke/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 16 | 10.55 MiB | [Download](https://huggingface.co/datasets/CyberHarem/milk_nikke/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 37 | 22.81 MiB | [Download](https://huggingface.co/datasets/CyberHarem/milk_nikke/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 16 | 17.45 MiB | [Download](https://huggingface.co/datasets/CyberHarem/milk_nikke/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 37 | 33.39 MiB | [Download](https://huggingface.co/datasets/CyberHarem/milk_nikke/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. 
| ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/milk_nikke', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 16 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, solo, looking_at_viewer, fingerless_gloves, navel, black_gloves, bare_shoulders, midriff, see-through, simple_background, white_background, closed_mouth, crop_top, shirt, stomach | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | looking_at_viewer | fingerless_gloves | navel | black_gloves | bare_shoulders | midriff | see-through | simple_background | white_background | closed_mouth | crop_top | shirt | stomach | 
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:--------------------|:--------------------|:--------|:---------------|:-----------------|:----------|:--------------|:--------------------|:-------------------|:---------------|:-----------|:--------|:----------| | 0 | 16 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
CyberHarem/milk_nikke
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2024-01-24T09:45:09+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-24T09:49:54+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of milk/ミルク/米尔克/밀크 (Nikke: Goddess of Victory) ====================================================== This is the dataset of milk/ミルク/米尔克/밀크 (Nikke: Goddess of Victory), containing 16 images and their tags. The core tags of this character are 'black\_hair, yellow\_eyes, short\_hair, breasts, bangs, hat, black\_headwear, large\_breasts', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team (huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
2a5bf456a5658e8f4b7316188fa7a1c6f6e7e888
# Dataset of a2/A2/A2/A2 (Nikke: Goddess of Victory) This is the dataset of a2/A2/A2/A2 (Nikke: Goddess of Victory), containing 476 images and their tags. The core tags of this character are `long_hair, breasts, blue_eyes, mole, mole_under_mouth, white_hair, hair_between_eyes, medium_breasts`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 476 | 679.01 MiB | [Download](https://huggingface.co/datasets/CyberHarem/a2_nikke/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 476 | 387.98 MiB | [Download](https://huggingface.co/datasets/CyberHarem/a2_nikke/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 1038 | 742.67 MiB | [Download](https://huggingface.co/datasets/CyberHarem/a2_nikke/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 476 | 604.67 MiB | [Download](https://huggingface.co/datasets/CyberHarem/a2_nikke/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 1038 | 1.03 GiB | [Download](https://huggingface.co/datasets/CyberHarem/a2_nikke/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. 
| ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/a2_nikke', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 23 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, android, armlet, bare_shoulders, black_gloves, elbow_gloves, robot_joints, solo, tank_top, black_shorts, short_shorts, looking_at_viewer, black_thighhighs | | 1 | 35 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, android, 
black_gloves, elbow_gloves, robot_joints, solo, holding_sword, bare_shoulders, black_shorts, short_shorts, tank_top, looking_at_viewer, armlet, black_thighhighs | | 2 | 7 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, android, bare_shoulders, black_gloves, black_shorts, elbow_gloves, holding_sword, looking_at_viewer, robot_joints, short_shorts, solo, tank_top, armlet, closed_mouth, collarbone, pink_lips, standing, very_long_hair, black_thighhighs, cowboy_shot, bangs, katana, grey_eyes | | 3 | 7 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, android, bare_shoulders, black_gloves, elbow_gloves, looking_at_viewer, robot_joints, simple_background, solo, tank_top, upper_body, white_background, armlet, parted_lips | | 4 | 6 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | 1girl, android, bare_shoulders, black_gloves, collarbone, elbow_gloves, looking_at_viewer, robot_joints, solo, upper_body, closed_mouth, armlet, pink_lips, black_tank_top | | 5 | 5 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | 1girl, android, ass, bare_shoulders, black_gloves, black_shorts, black_thighhighs, elbow_gloves, from_behind, robot_joints, short_shorts, solo, high_heels, holding_sword, looking_back, standing, full_body, looking_at_viewer, thighs | | 6 | 6 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | 1girl, black_dress, black_hairband, black_thighhighs, 
cleavage_cutout, katana, black_gloves, feather-trimmed_sleeves, holding_sword, juliet_sleeves, looking_at_viewer, short_hair, solo, thigh_boots | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | android | armlet | bare_shoulders | black_gloves | elbow_gloves | robot_joints | solo | tank_top | black_shorts | short_shorts | looking_at_viewer | black_thighhighs | holding_sword | closed_mouth | collarbone | pink_lips | standing | very_long_hair | cowboy_shot | bangs | katana | grey_eyes | simple_background | upper_body | white_background | parted_lips | black_tank_top | ass | from_behind | high_heels | looking_back | full_body | thighs | black_dress | black_hairband | cleavage_cutout | feather-trimmed_sleeves | juliet_sleeves | short_hair | thigh_boots | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:----------|:---------|:-----------------|:---------------|:---------------|:---------------|:-------|:-----------|:---------------|:---------------|:--------------------|:-------------------|:----------------|:---------------|:-------------|:------------|:-----------|:-----------------|:--------------|:--------|:---------|:------------|:--------------------|:-------------|:-------------------|:--------------|:-----------------|:------|:--------------|:-------------|:---------------|:------------|:---------|:--------------|:-----------------|:------------------|:--------------------------|:-----------------|:-------------|:--------------| | 0 | 23 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1 | 35 | ![](samples/1/clu1-sample0.png) | 
![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 2 | 7 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | 3 | 7 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | X | X | X | X | X | X | X | X | | | X | | | | | | | | | | | | X | X | X | X | | | | | | | | | | | | | | | | 4 | 6 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | X | X | X | X | X | X | X | X | | | | X | | | X | X | X | | | | | | | | X | | | X | | | | | | | | | | | | | | | 5 | 5 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | X | X | | X | X | X | X | X | | X | X | X | X | X | | | | X | | | | | | | | | | | X | X | X | X | X | X | | | | | | | | | 6 | 6 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | X | | | | X | | | X | | | | X | X | X | | | | | | | | X | | | | | | | | | | | | | X | X | X | X | X | X | X |
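The IMG+TXT packages listed above can be extracted with the same `hf_hub_download` + `zipfile` pattern as the raw package. A minimal sketch for reading an extracted directory — the one-`.txt`-tag-file-per-image layout with matching filename stems is an assumption based on the package type, not verified against the archives:

```python
import os

def load_img_txt_pairs(dataset_dir):
    """Collect (image_path, tags) pairs from an extracted IMG+TXT package,
    assuming each image has a same-stem .txt tag file beside it."""
    pairs = []
    for name in sorted(os.listdir(dataset_dir)):
        stem, ext = os.path.splitext(name)
        if ext.lower() not in ('.png', '.jpg', '.jpeg', '.webp'):
            continue
        txt_path = os.path.join(dataset_dir, stem + '.txt')
        tags = ''
        if os.path.isfile(txt_path):
            with open(txt_path, encoding='utf-8') as f:
                tags = f.read().strip()
        pairs.append((os.path.join(dataset_dir, name), tags))
    return pairs
```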
CyberHarem/a2_nikke
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2024-01-24T10:00:56+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-24T11:53:37+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of a2/A2/A2/A2 (Nikke: Goddess of Victory) ================================================== This is the dataset of a2/A2/A2/A2 (Nikke: Goddess of Victory), containing 476 images and their tags. The core tags of this character are 'long\_hair, breasts, blue\_eyes, mole, mole\_under\_mouth, white\_hair, hair\_between\_eyes, medium\_breasts', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team (huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
0e314ce2c5434fadccafda3513bea02350c606e0
# Dataset Card for "aligment-handbook-format-intel-orca-dpo-pairs" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
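Per the dataset configuration, each example holds `chosen` and `rejected` lists of `{content, role}` messages (the alignment-handbook DPO format). A minimal sketch of building one example in that shape — the user/assistant role ordering is an assumption, not verified against the data:

```python
def to_dpo_pair(prompt, chosen_answer, rejected_answer):
    # build one example in the chosen/rejected message-list schema
    # described by this dataset's feature definitions
    return {
        "chosen": [
            {"content": prompt, "role": "user"},
            {"content": chosen_answer, "role": "assistant"},
        ],
        "rejected": [
            {"content": prompt, "role": "user"},
            {"content": rejected_answer, "role": "assistant"},
        ],
    }
```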
hunggggg/aligment-handbook-format-intel-orca-dpo-pairs
[ "region:us" ]
2024-01-24T10:03:59+00:00
{"dataset_info": {"features": [{"name": "chosen", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "rejected", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 42576577.4042305, "num_examples": 11573}, {"name": "test", "num_bytes": 4731139.5957695, "num_examples": 1286}], "download_size": 24315227, "dataset_size": 47307717.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]}
2024-01-24T10:11:14+00:00
[]
[]
TAGS #region-us
# Dataset Card for "aligment-handbook-format-intel-orca-dpo-pairs" More Information needed
[ "# Dataset Card for \"aligment-handbook-format-intel-orca-dpo-pairs\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"aligment-handbook-format-intel-orca-dpo-pairs\"\n\nMore Information needed" ]
351160a4157b27af281ff16b192b1b0205c1cae3
# Twitch Streaming Video Dataset ## Overview The Twitch Streaming Video Dataset is a collection of streaming videos sourced from Twitch, one of the largest streaming platforms. It focuses on the "Talk Shows & Podcasts" category and includes content from two famous streamers: @HealthyGamer_GG, an addiction psychiatrist, and @Markiplier, a well-known influencer. ### Streamers and Channels - **@HealthyGamer_GG:** This streamer specializes in helping gamers overcome game addiction by discussing mental health issues. With approximately 500K subscriptions and 5M views, their channel provides valuable insights into addiction psychiatry. - **@Markiplier:** A famous influencer with over 2M subscriptions, @Markiplier shares life moments and thought-provoking perspectives in his streaming. His channel boasts a total of 12.8M views. ## Dataset Details - **Video Length:** The dataset comprises 65 streaming videos with a total length of 9128 minutes (approximately 152.1 hours). Each streaming video is approximately 2.34 hours long. - **Viewer Messages:** We crawled all the viewer messages from the chat boxes of the testing videos to evaluate the effect of viewer intentions on highlight extraction. - **Segment Information:** The dataset contains a total of 163 streamer discourse segments. These segments vary in duration, with a significant portion being less than one minute. This short segment length is intentional, as streamers often engage in dynamic and lively conversations. ## Usage You can use this dataset for various research purposes, including: - Highlight extraction from conversation-oriented streaming videos. - Analyzing viewer engagement and intentions in chat messages. - Exploring the impact of streamer behavior on viewer interaction. 
## Citation If you use this dataset in your research, please cite our paper https://www.sciencedirect.com/science/article/pii/S0950705122010930 as the source.
samjaylin/COHETS
[ "region:us" ]
2024-01-24T10:20:10+00:00
{}
2024-01-24T10:26:25+00:00
[]
[]
TAGS #region-us
# Twitch Streaming Video Dataset ## Overview The Twitch Streaming Video Dataset is a collection of streaming videos sourced from Twitch, one of the largest streaming platforms. It focuses on the "Talk Shows & Podcasts" category and includes content from two famous streamers: @HealthyGamer_GG, an addiction psychiatrist, and @Markiplier, a well-known influencer. ### Streamers and Channels - @HealthyGamer_GG: This streamer specializes in helping gamers overcome game addiction by discussing mental health issues. With approximately 500K subscriptions and 5M views, their channel provides valuable insights into addiction psychiatry. - @Markiplier: A famous influencer with over 2M subscriptions, @Markiplier shares life moments and thought-provoking perspectives in his streaming. His channel boasts a total of 12.8M views. ## Dataset Details - Video Length: The dataset comprises 65 streaming videos with a total length of 9128 minutes (approximately 152.1 hours). Each streaming video is approximately 2.34 hours long. - Viewer Messages: We crawled all the viewer messages from the chat boxes of the testing videos to evaluate the effect of viewer intentions on highlight extraction. - Segment Information: The dataset contains a total of 163 streamer discourse segments. These segments vary in duration, with a significant portion being less than one minute. This short segment length is intentional, as streamers often engage in dynamic and lively conversations. ## Usage You can use this dataset for various research purposes, including: - Highlight extraction from conversation-oriented streaming videos. - Analyzing viewer engagement and intentions in chat messages. - Exploring the impact of streamer behavior on viewer interaction. If you use this dataset in your research, please cite our paper URL as the source.
[ "# Twitch Streaming Video Dataset", "## Overview\n\nThe Twitch Streaming Video Dataset is a collection of streaming videos sourced from Twitch, one of the largest streaming platforms. It focuses on the \"Talk Shows & Podcasts\" category and includes content from two famous streamers: @HealthyGamer_GG, an addiction psychiatrist, and @Markiplier, a well-known influencer.", "### Streamers and Channels\n\n- @HealthyGamer_GG: This streamer specializes in helping gamers overcome game addiction by discussing mental health issues. With approximately 500K subscriptions and 5M views, their channel provides valuable insights into addiction psychiatry.\n\n- @Markiplier: A famous influencer with over 2M subscriptions, @Markiplier shares life moments and thought-provoking perspectives in his streaming. His channel boasts a total of 12.8M views.", "## Dataset Details\n\n- Video Length: The dataset comprises 65 streaming videos with a total length of 9128 minutes (approximately 152.1 hours). Each streaming video is approximately 2.34 hours long.\n\n- Viewer Messages: We crawled all the viewer messages from the chat boxes of the testing videos to evaluate the effect of viewer intentions on highlight extraction.\n\n- Segment Information: The dataset contains a total of 163 streamer discourse segments. These segments vary in duration, with a significant portion being less than one minute. This short segment length is intentional, as streamers often engage in dynamic and lively conversations.", "## Usage\n\nYou can use this dataset for various research purposes, including:\n\n- Highlight extraction from conversation-oriented streaming videos.\n- Analyzing viewer engagement and intentions in chat messages.\n- Exploring the impact of streamer behavior on viewer interaction.\n\nIf you use this dataset in your research, please cite our paper URL as the source." ]
[ "TAGS\n#region-us \n", "# Twitch Streaming Video Dataset", "## Overview\n\nThe Twitch Streaming Video Dataset is a collection of streaming videos sourced from Twitch, one of the largest streaming platforms. It focuses on the \"Talk Shows & Podcasts\" category and includes content from two famous streamers: @HealthyGamer_GG, an addiction psychiatrist, and @Markiplier, a well-known influencer.", "### Streamers and Channels\n\n- @HealthyGamer_GG: This streamer specializes in helping gamers overcome game addiction by discussing mental health issues. With approximately 500K subscriptions and 5M views, their channel provides valuable insights into addiction psychiatry.\n\n- @Markiplier: A famous influencer with over 2M subscriptions, @Markiplier shares life moments and thought-provoking perspectives in his streaming. His channel boasts a total of 12.8M views.", "## Dataset Details\n\n- Video Length: The dataset comprises 65 streaming videos with a total length of 9128 minutes (approximately 152.1 hours). Each streaming video is approximately 2.34 hours long.\n\n- Viewer Messages: We crawled all the viewer messages from the chat boxes of the testing videos to evaluate the effect of viewer intentions on highlight extraction.\n\n- Segment Information: The dataset contains a total of 163 streamer discourse segments. These segments vary in duration, with a significant portion being less than one minute. This short segment length is intentional, as streamers often engage in dynamic and lively conversations.", "## Usage\n\nYou can use this dataset for various research purposes, including:\n\n- Highlight extraction from conversation-oriented streaming videos.\n- Analyzing viewer engagement and intentions in chat messages.\n- Exploring the impact of streamer behavior on viewer interaction.\n\nIf you use this dataset in your research, please cite our paper URL as the source." ]
3778c44024d882e686243fa31c95fd69f2a2ff43
# Dataset Card for Evaluation run of DangFutures/BIG_DANG_BOT <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [DangFutures/BIG_DANG_BOT](https://huggingface.co/DangFutures/BIG_DANG_BOT) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_DangFutures__BIG_DANG_BOT", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2024-01-24T10:23:33.414372](https://huggingface.co/datasets/open-llm-leaderboard/details_DangFutures__BIG_DANG_BOT/blob/main/results_2024-01-24T10-23-33.414372.json)(note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.6873010049412217, "acc_stderr": 0.03039019909881743, "acc_norm": 0.700585054900533, "acc_norm_stderr": 0.03121364644444388, "mc1": 0.32558139534883723, "mc1_stderr": 0.016403989469907825, "mc2": 0.4907419803847836, "mc2_stderr": 0.014683278149160121 }, "harness|arc:challenge|25": { "acc": 0.5631399317406144, "acc_stderr": 0.01449442158425652, "acc_norm": 0.6032423208191127, "acc_norm_stderr": 0.014296513020180635 }, "harness|hellaswag|10": { "acc": 0.6421031666998606, "acc_stderr": 0.0047840184976798185, "acc_norm": 0.8201553475403306, "acc_norm_stderr": 0.00383273101759212 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.38, "acc_stderr": 0.048783173121456316, "acc_norm": 0.38, "acc_norm_stderr": 0.048783173121456316 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.6962962962962963, "acc_stderr": 0.03972552884785136, "acc_norm": 0.6962962962962963, "acc_norm_stderr": 0.03972552884785136 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.7631578947368421, "acc_stderr": 0.03459777606810536, "acc_norm": 0.7631578947368421, "acc_norm_stderr": 0.03459777606810536 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.68, "acc_stderr": 0.04688261722621504, "acc_norm": 0.68, "acc_norm_stderr": 0.04688261722621504 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.7849056603773585, "acc_stderr": 0.02528839450289137, "acc_norm": 0.7849056603773585, "acc_norm_stderr": 0.02528839450289137 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.8611111111111112, "acc_stderr": 0.028919802956134916, "acc_norm": 0.8611111111111112, "acc_norm_stderr": 0.028919802956134916 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.54, "acc_stderr": 0.05009082659620332, "acc_norm": 0.54, "acc_norm_stderr": 0.05009082659620332 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.52, "acc_stderr": 0.050211673156867795, "acc_norm": 0.52, 
"acc_norm_stderr": 0.050211673156867795 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.42, "acc_stderr": 0.049604496374885836, "acc_norm": 0.42, "acc_norm_stderr": 0.049604496374885836 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6994219653179191, "acc_stderr": 0.0349610148119118, "acc_norm": 0.6994219653179191, "acc_norm_stderr": 0.0349610148119118 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.43137254901960786, "acc_stderr": 0.04928099597287533, "acc_norm": 0.43137254901960786, "acc_norm_stderr": 0.04928099597287533 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.8, "acc_stderr": 0.04020151261036845, "acc_norm": 0.8, "acc_norm_stderr": 0.04020151261036845 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.6425531914893617, "acc_stderr": 0.031329417894764254, "acc_norm": 0.6425531914893617, "acc_norm_stderr": 0.031329417894764254 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.6228070175438597, "acc_stderr": 0.04559522141958216, "acc_norm": 0.6228070175438597, "acc_norm_stderr": 0.04559522141958216 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.6689655172413793, "acc_stderr": 0.03921545312467122, "acc_norm": 0.6689655172413793, "acc_norm_stderr": 0.03921545312467122 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.46825396825396826, "acc_stderr": 0.025699352832131796, "acc_norm": 0.46825396825396826, "acc_norm_stderr": 0.025699352832131796 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.5396825396825397, "acc_stderr": 0.04458029125470973, "acc_norm": 0.5396825396825397, "acc_norm_stderr": 0.04458029125470973 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.41, "acc_stderr": 0.049431107042371025, "acc_norm": 0.41, "acc_norm_stderr": 0.049431107042371025 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.832258064516129, "acc_stderr": 0.021255464065371318, "acc_norm": 0.832258064516129, "acc_norm_stderr": 0.021255464065371318 }, 
"harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.5714285714285714, "acc_stderr": 0.03481904844438804, "acc_norm": 0.5714285714285714, "acc_norm_stderr": 0.03481904844438804 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.69, "acc_stderr": 0.04648231987117316, "acc_norm": 0.69, "acc_norm_stderr": 0.04648231987117316 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.8, "acc_stderr": 0.031234752377721164, "acc_norm": 0.8, "acc_norm_stderr": 0.031234752377721164 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.8636363636363636, "acc_stderr": 0.024450155973189835, "acc_norm": 0.8636363636363636, "acc_norm_stderr": 0.024450155973189835 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.917098445595855, "acc_stderr": 0.01989934131572178, "acc_norm": 0.917098445595855, "acc_norm_stderr": 0.01989934131572178 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.6948717948717948, "acc_stderr": 0.023346335293325887, "acc_norm": 0.6948717948717948, "acc_norm_stderr": 0.023346335293325887 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.32592592592592595, "acc_stderr": 0.028578348365473072, "acc_norm": 0.32592592592592595, "acc_norm_stderr": 0.028578348365473072 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.8025210084033614, "acc_stderr": 0.025859164122051453, "acc_norm": 0.8025210084033614, "acc_norm_stderr": 0.025859164122051453 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.40397350993377484, "acc_stderr": 0.040064856853653415, "acc_norm": 0.40397350993377484, "acc_norm_stderr": 0.040064856853653415 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8954128440366973, "acc_stderr": 0.013120530245265594, "acc_norm": 0.8954128440366973, "acc_norm_stderr": 0.013120530245265594 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.5879629629629629, "acc_stderr": 0.03356787758160831, "acc_norm": 
0.5879629629629629, "acc_norm_stderr": 0.03356787758160831 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.8480392156862745, "acc_stderr": 0.025195658428931792, "acc_norm": 0.8480392156862745, "acc_norm_stderr": 0.025195658428931792 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.8818565400843882, "acc_stderr": 0.021011052659878467, "acc_norm": 0.8818565400843882, "acc_norm_stderr": 0.021011052659878467 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.7802690582959642, "acc_stderr": 0.027790177064383595, "acc_norm": 0.7802690582959642, "acc_norm_stderr": 0.027790177064383595 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.816793893129771, "acc_stderr": 0.03392770926494733, "acc_norm": 0.816793893129771, "acc_norm_stderr": 0.03392770926494733 }, "harness|hendrycksTest-international_law|5": { "acc": 0.8512396694214877, "acc_stderr": 0.03248470083807194, "acc_norm": 0.8512396694214877, "acc_norm_stderr": 0.03248470083807194 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.8425925925925926, "acc_stderr": 0.03520703990517963, "acc_norm": 0.8425925925925926, "acc_norm_stderr": 0.03520703990517963 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7791411042944786, "acc_stderr": 0.03259177392742179, "acc_norm": 0.7791411042944786, "acc_norm_stderr": 0.03259177392742179 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.6339285714285714, "acc_stderr": 0.04572372358737431, "acc_norm": 0.6339285714285714, "acc_norm_stderr": 0.04572372358737431 }, "harness|hendrycksTest-management|5": { "acc": 0.8543689320388349, "acc_stderr": 0.034926064766237906, "acc_norm": 0.8543689320388349, "acc_norm_stderr": 0.034926064766237906 }, "harness|hendrycksTest-marketing|5": { "acc": 0.9188034188034188, "acc_stderr": 0.017893784904018533, "acc_norm": 0.9188034188034188, "acc_norm_stderr": 0.017893784904018533 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.74, "acc_stderr": 0.04408440022768079, "acc_norm": 0.74, 
"acc_norm_stderr": 0.04408440022768079 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.8633461047254151, "acc_stderr": 0.012282876868629234, "acc_norm": 0.8633461047254151, "acc_norm_stderr": 0.012282876868629234 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.7658959537572254, "acc_stderr": 0.022797110278071134, "acc_norm": 0.7658959537572254, "acc_norm_stderr": 0.022797110278071134 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.4022346368715084, "acc_stderr": 0.016399716732847142, "acc_norm": 0.4022346368715084, "acc_norm_stderr": 0.016399716732847142 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.7745098039215687, "acc_stderr": 0.023929155517351305, "acc_norm": 0.7745098039215687, "acc_norm_stderr": 0.023929155517351305 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.7877813504823151, "acc_stderr": 0.023222756797435098, "acc_norm": 0.7877813504823151, "acc_norm_stderr": 0.023222756797435098 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.8055555555555556, "acc_stderr": 0.022021366100220197, "acc_norm": 0.8055555555555556, "acc_norm_stderr": 0.022021366100220197 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.549645390070922, "acc_stderr": 0.029680105565029043, "acc_norm": 0.549645390070922, "acc_norm_stderr": 0.029680105565029043 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.529986962190352, "acc_stderr": 0.012747248967079036, "acc_norm": 0.529986962190352, "acc_norm_stderr": 0.012747248967079036 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.7720588235294118, "acc_stderr": 0.0254830814680298, "acc_norm": 0.7720588235294118, "acc_norm_stderr": 0.0254830814680298 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.7761437908496732, "acc_stderr": 0.016863008585416613, "acc_norm": 0.7761437908496732, "acc_norm_stderr": 0.016863008585416613 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6727272727272727, "acc_stderr": 0.0449429086625209, "acc_norm": 
0.6727272727272727, "acc_norm_stderr": 0.0449429086625209 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.7836734693877551, "acc_stderr": 0.026358916334904028, "acc_norm": 0.7836734693877551, "acc_norm_stderr": 0.026358916334904028 }, "harness|hendrycksTest-sociology|5": { "acc": 0.8756218905472637, "acc_stderr": 0.023335401790166327, "acc_norm": 0.8756218905472637, "acc_norm_stderr": 0.023335401790166327 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.89, "acc_stderr": 0.03144660377352203, "acc_norm": 0.89, "acc_norm_stderr": 0.03144660377352203 }, "harness|hendrycksTest-virology|5": { "acc": 0.5240963855421686, "acc_stderr": 0.03887971849597264, "acc_norm": 0.5240963855421686, "acc_norm_stderr": 0.03887971849597264 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.8771929824561403, "acc_stderr": 0.02517298435015575, "acc_norm": 0.8771929824561403, "acc_norm_stderr": 0.02517298435015575 }, "harness|truthfulqa:mc|0": { "mc1": 0.32558139534883723, "mc1_stderr": 0.016403989469907825, "mc2": 0.4907419803847836, "mc2_stderr": 0.014683278149160121 }, "harness|winogrande|5": { "acc": 0.8089976322020521, "acc_stderr": 0.011047808761510423 }, "harness|gsm8k|5": { "acc": 0.0, "acc_stderr": 0.0 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. 
--> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. 
--> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
open-llm-leaderboard/details_DangFutures__BIG_DANG_BOT
[ "region:us" ]
2024-01-24T10:25:48+00:00
{"pretty_name": "Evaluation run of DangFutures/BIG_DANG_BOT", "dataset_summary": "Dataset automatically created during the evaluation run of model [DangFutures/BIG_DANG_BOT](https://huggingface.co/DangFutures/BIG_DANG_BOT) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_DangFutures__BIG_DANG_BOT\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-24T10:23:33.414372](https://huggingface.co/datasets/open-llm-leaderboard/details_DangFutures__BIG_DANG_BOT/blob/main/results_2024-01-24T10-23-33.414372.json)(note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6873010049412217,\n \"acc_stderr\": 0.03039019909881743,\n \"acc_norm\": 0.700585054900533,\n \"acc_norm_stderr\": 0.03121364644444388,\n \"mc1\": 0.32558139534883723,\n \"mc1_stderr\": 0.016403989469907825,\n \"mc2\": 0.4907419803847836,\n \"mc2_stderr\": 0.014683278149160121\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5631399317406144,\n \"acc_stderr\": 0.01449442158425652,\n \"acc_norm\": 0.6032423208191127,\n \"acc_norm_stderr\": 0.014296513020180635\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6421031666998606,\n \"acc_stderr\": 0.0047840184976798185,\n \"acc_norm\": 0.8201553475403306,\n \"acc_norm_stderr\": 0.00383273101759212\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.38,\n \"acc_stderr\": 0.048783173121456316,\n \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.048783173121456316\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6962962962962963,\n \"acc_stderr\": 0.03972552884785136,\n \"acc_norm\": 0.6962962962962963,\n \"acc_norm_stderr\": 0.03972552884785136\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.7631578947368421,\n \"acc_stderr\": 0.03459777606810536,\n \"acc_norm\": 0.7631578947368421,\n \"acc_norm_stderr\": 0.03459777606810536\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.68,\n \"acc_stderr\": 0.04688261722621504,\n \"acc_norm\": 0.68,\n \"acc_norm_stderr\": 0.04688261722621504\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.7849056603773585,\n \"acc_stderr\": 0.02528839450289137,\n \"acc_norm\": 0.7849056603773585,\n \"acc_norm_stderr\": 0.02528839450289137\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.8611111111111112,\n \"acc_stderr\": 0.028919802956134916,\n \"acc_norm\": 0.8611111111111112,\n \"acc_norm_stderr\": 0.028919802956134916\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.54,\n 
\"acc_stderr\": 0.05009082659620332,\n \"acc_norm\": 0.54,\n \"acc_norm_stderr\": 0.05009082659620332\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.52,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.52,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.42,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.42,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6994219653179191,\n \"acc_stderr\": 0.0349610148119118,\n \"acc_norm\": 0.6994219653179191,\n \"acc_norm_stderr\": 0.0349610148119118\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.43137254901960786,\n \"acc_stderr\": 0.04928099597287533,\n \"acc_norm\": 0.43137254901960786,\n \"acc_norm_stderr\": 0.04928099597287533\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.8,\n \"acc_stderr\": 0.04020151261036845,\n \"acc_norm\": 0.8,\n \"acc_norm_stderr\": 0.04020151261036845\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.6425531914893617,\n \"acc_stderr\": 0.031329417894764254,\n \"acc_norm\": 0.6425531914893617,\n \"acc_norm_stderr\": 0.031329417894764254\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.6228070175438597,\n \"acc_stderr\": 0.04559522141958216,\n \"acc_norm\": 0.6228070175438597,\n \"acc_norm_stderr\": 0.04559522141958216\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.6689655172413793,\n \"acc_stderr\": 0.03921545312467122,\n \"acc_norm\": 0.6689655172413793,\n \"acc_norm_stderr\": 0.03921545312467122\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.46825396825396826,\n \"acc_stderr\": 0.025699352832131796,\n \"acc_norm\": 0.46825396825396826,\n \"acc_norm_stderr\": 0.025699352832131796\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.5396825396825397,\n \"acc_stderr\": 
0.04458029125470973,\n \"acc_norm\": 0.5396825396825397,\n \"acc_norm_stderr\": 0.04458029125470973\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.41,\n \"acc_stderr\": 0.049431107042371025,\n \"acc_norm\": 0.41,\n \"acc_norm_stderr\": 0.049431107042371025\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.832258064516129,\n \"acc_stderr\": 0.021255464065371318,\n \"acc_norm\": 0.832258064516129,\n \"acc_norm_stderr\": 0.021255464065371318\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5714285714285714,\n \"acc_stderr\": 0.03481904844438804,\n \"acc_norm\": 0.5714285714285714,\n \"acc_norm_stderr\": 0.03481904844438804\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.8,\n \"acc_stderr\": 0.031234752377721164,\n \"acc_norm\": 0.8,\n \"acc_norm_stderr\": 0.031234752377721164\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.8636363636363636,\n \"acc_stderr\": 0.024450155973189835,\n \"acc_norm\": 0.8636363636363636,\n \"acc_norm_stderr\": 0.024450155973189835\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.917098445595855,\n \"acc_stderr\": 0.01989934131572178,\n \"acc_norm\": 0.917098445595855,\n \"acc_norm_stderr\": 0.01989934131572178\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6948717948717948,\n \"acc_stderr\": 0.023346335293325887,\n \"acc_norm\": 0.6948717948717948,\n \"acc_norm_stderr\": 0.023346335293325887\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.32592592592592595,\n \"acc_stderr\": 0.028578348365473072,\n \"acc_norm\": 0.32592592592592595,\n \"acc_norm_stderr\": 0.028578348365473072\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n 
\"acc\": 0.8025210084033614,\n \"acc_stderr\": 0.025859164122051453,\n \"acc_norm\": 0.8025210084033614,\n \"acc_norm_stderr\": 0.025859164122051453\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.40397350993377484,\n \"acc_stderr\": 0.040064856853653415,\n \"acc_norm\": 0.40397350993377484,\n \"acc_norm_stderr\": 0.040064856853653415\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8954128440366973,\n \"acc_stderr\": 0.013120530245265594,\n \"acc_norm\": 0.8954128440366973,\n \"acc_norm_stderr\": 0.013120530245265594\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5879629629629629,\n \"acc_stderr\": 0.03356787758160831,\n \"acc_norm\": 0.5879629629629629,\n \"acc_norm_stderr\": 0.03356787758160831\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8480392156862745,\n \"acc_stderr\": 0.025195658428931792,\n \"acc_norm\": 0.8480392156862745,\n \"acc_norm_stderr\": 0.025195658428931792\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.8818565400843882,\n \"acc_stderr\": 0.021011052659878467,\n \"acc_norm\": 0.8818565400843882,\n \"acc_norm_stderr\": 0.021011052659878467\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7802690582959642,\n \"acc_stderr\": 0.027790177064383595,\n \"acc_norm\": 0.7802690582959642,\n \"acc_norm_stderr\": 0.027790177064383595\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.816793893129771,\n \"acc_stderr\": 0.03392770926494733,\n \"acc_norm\": 0.816793893129771,\n \"acc_norm_stderr\": 0.03392770926494733\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8512396694214877,\n \"acc_stderr\": 0.03248470083807194,\n \"acc_norm\": 0.8512396694214877,\n \"acc_norm_stderr\": 0.03248470083807194\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8425925925925926,\n \"acc_stderr\": 0.03520703990517963,\n \"acc_norm\": 0.8425925925925926,\n \"acc_norm_stderr\": 
0.03520703990517963\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7791411042944786,\n \"acc_stderr\": 0.03259177392742179,\n \"acc_norm\": 0.7791411042944786,\n \"acc_norm_stderr\": 0.03259177392742179\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.6339285714285714,\n \"acc_stderr\": 0.04572372358737431,\n \"acc_norm\": 0.6339285714285714,\n \"acc_norm_stderr\": 0.04572372358737431\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8543689320388349,\n \"acc_stderr\": 0.034926064766237906,\n \"acc_norm\": 0.8543689320388349,\n \"acc_norm_stderr\": 0.034926064766237906\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.9188034188034188,\n \"acc_stderr\": 0.017893784904018533,\n \"acc_norm\": 0.9188034188034188,\n \"acc_norm_stderr\": 0.017893784904018533\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.74,\n \"acc_stderr\": 0.04408440022768079,\n \"acc_norm\": 0.74,\n \"acc_norm_stderr\": 0.04408440022768079\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8633461047254151,\n \"acc_stderr\": 0.012282876868629234,\n \"acc_norm\": 0.8633461047254151,\n \"acc_norm_stderr\": 0.012282876868629234\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7658959537572254,\n \"acc_stderr\": 0.022797110278071134,\n \"acc_norm\": 0.7658959537572254,\n \"acc_norm_stderr\": 0.022797110278071134\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.4022346368715084,\n \"acc_stderr\": 0.016399716732847142,\n \"acc_norm\": 0.4022346368715084,\n \"acc_norm_stderr\": 0.016399716732847142\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7745098039215687,\n \"acc_stderr\": 0.023929155517351305,\n \"acc_norm\": 0.7745098039215687,\n \"acc_norm_stderr\": 0.023929155517351305\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7877813504823151,\n \"acc_stderr\": 0.023222756797435098,\n \"acc_norm\": 0.7877813504823151,\n \"acc_norm_stderr\": 
0.023222756797435098\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.8055555555555556,\n \"acc_stderr\": 0.022021366100220197,\n \"acc_norm\": 0.8055555555555556,\n \"acc_norm_stderr\": 0.022021366100220197\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.549645390070922,\n \"acc_stderr\": 0.029680105565029043,\n \"acc_norm\": 0.549645390070922,\n \"acc_norm_stderr\": 0.029680105565029043\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.529986962190352,\n \"acc_stderr\": 0.012747248967079036,\n \"acc_norm\": 0.529986962190352,\n \"acc_norm_stderr\": 0.012747248967079036\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.7720588235294118,\n \"acc_stderr\": 0.0254830814680298,\n \"acc_norm\": 0.7720588235294118,\n \"acc_norm_stderr\": 0.0254830814680298\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.7761437908496732,\n \"acc_stderr\": 0.016863008585416613,\n \"acc_norm\": 0.7761437908496732,\n \"acc_norm_stderr\": 0.016863008585416613\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6727272727272727,\n \"acc_stderr\": 0.0449429086625209,\n \"acc_norm\": 0.6727272727272727,\n \"acc_norm_stderr\": 0.0449429086625209\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7836734693877551,\n \"acc_stderr\": 0.026358916334904028,\n \"acc_norm\": 0.7836734693877551,\n \"acc_norm_stderr\": 0.026358916334904028\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8756218905472637,\n \"acc_stderr\": 0.023335401790166327,\n \"acc_norm\": 0.8756218905472637,\n \"acc_norm_stderr\": 0.023335401790166327\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.89,\n \"acc_stderr\": 0.03144660377352203,\n \"acc_norm\": 0.89,\n \"acc_norm_stderr\": 0.03144660377352203\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5240963855421686,\n \"acc_stderr\": 0.03887971849597264,\n \"acc_norm\": 0.5240963855421686,\n 
\"acc_norm_stderr\": 0.03887971849597264\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8771929824561403,\n \"acc_stderr\": 0.02517298435015575,\n \"acc_norm\": 0.8771929824561403,\n \"acc_norm_stderr\": 0.02517298435015575\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.32558139534883723,\n \"mc1_stderr\": 0.016403989469907825,\n \"mc2\": 0.4907419803847836,\n \"mc2_stderr\": 0.014683278149160121\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8089976322020521,\n \"acc_stderr\": 0.011047808761510423\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n }\n}\n```", "repo_url": "https://huggingface.co/DangFutures/BIG_DANG_BOT", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|arc:challenge|25_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|gsm8k|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|hellaswag|10_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-24T10-23-33.414372.parquet", 
"**/details_harness|hendrycksTest-astronomy|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T10-23-33.414372.parquet", 
"**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T10-23-33.414372.parquet", 
"**/details_harness|hendrycksTest-nutrition|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T10-23-33.414372.parquet", 
"**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T10-23-33.414372.parquet", 
"**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-24T10-23-33.414372.parquet", 
"**/details_harness|hendrycksTest-security_studies|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-24T10-23-33.414372.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": 
["**/details_harness|hendrycksTest-college_physics|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", 
"data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-24T10-23-33.414372.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", 
"path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["**/details_harness|winogrande|5_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-24T10-23-33.414372.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_01_24T10_23_33.414372", "path": ["results_2024-01-24T10-23-33.414372.parquet"]}, {"split": "latest", "path": ["results_2024-01-24T10-23-33.414372.parquet"]}]}]}
2024-01-24T10:26:26+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of DangFutures/BIG_DANG_BOT Dataset automatically created during the evaluation run of model DangFutures/BIG_DANG_BOT on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2024-01-24T10:23:33.414372 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
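The loading step mentioned above can be sketched as follows. This is a minimal sketch, assuming the detail repo follows the standard Open LLM Leaderboard naming pattern (`open-llm-leaderboard/details_<org>__<model>`); `harness_winogrande_5` is one of the configurations listed in this card.

```python
# Sketch: loading one task's details for this evaluation run.
# The repo-id naming pattern is assumed from the leaderboard convention;
# "harness_winogrande_5" is one of the configs listed in this card.

def leaderboard_details_repo(org: str, model: str) -> str:
    # detail repos are published as open-llm-leaderboard/details_<org>__<model>
    return f"open-llm-leaderboard/details_{org}__{model}"

repo_id = leaderboard_details_repo("DangFutures", "BIG_DANG_BOT")

# To actually fetch the split (requires the `datasets` package and network access):
#   from datasets import load_dataset
#   data = load_dataset(repo_id, "harness_winogrande_5", split="train")
```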
[ "# Dataset Card for Evaluation run of DangFutures/BIG_DANG_BOT\n\n\n\nDataset automatically created during the evaluation run of model DangFutures/BIG_DANG_BOT on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-01-24T10:23:33.414372 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of DangFutures/BIG_DANG_BOT\n\n\n\nDataset automatically created during the evaluation run of model DangFutures/BIG_DANG_BOT on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-01-24T10:23:33.414372 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
74c0b18ab503e9755b9218c927771773fa3beded
# Dataset of rosanna/イベント/罗珊娜/로산나 (Nikke: Goddess of Victory)

This is the dataset of rosanna/イベント/罗珊娜/로산나 (Nikke: Goddess of Victory), containing 24 images and their tags.

The core tags of this character are `red_eyes, black_hair, long_hair, multicolored_hair, breasts, white_hair, large_breasts, two-tone_hair, parted_bangs, earrings, mole`, which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:---------|:-----------|:------------|
| raw | 24 | 39.94 MiB | [Download](https://huggingface.co/datasets/CyberHarem/rosanna_nikke/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 24 | 20.64 MiB | [Download](https://huggingface.co/datasets/CyberHarem/rosanna_nikke/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 60 | 43.73 MiB | [Download](https://huggingface.co/datasets/CyberHarem/rosanna_nikke/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 24 | 34.93 MiB | [Download](https://huggingface.co/datasets/CyberHarem/rosanna_nikke/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 60 | 65.67 MiB | [Download](https://huggingface.co/datasets/CyberHarem/rosanna_nikke/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code:

```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/rosanna_nikke',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; maybe some outfits can be mined here.
### Raw Text Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:----|:----|:----|:----|:----|:----|
| 0 | 24 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, looking_at_viewer, navel, solo, smile, tattoo, cleavage, necklace, black_gloves, fur_trim, thighhighs, blush, midriff, thighs, bracelet, choker, thigh_boots, virtual_youtuber, shorts |

### Table Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | looking_at_viewer | navel | solo | smile | tattoo | cleavage | necklace | black_gloves | fur_trim | thighhighs | blush | midriff | thighs | bracelet | choker | thigh_boots | virtual_youtuber | shorts |
|----:|----------:|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|
| 0 | 24 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
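The tag columns above can also be queried programmatically once items are loaded with the waifuc snippet earlier in this card. A small sketch follows; note that the comma-separated tag-string format and the `example_meta` values are assumptions for illustration, not something the card documents.

```python
# Sketch: checking whether a loaded item carries one of the cluster tags above.
# Assumes item.meta['tags'] is either a comma-separated string or a list of
# tag names (an assumption; the exact storage format is not documented here).

def has_tag(meta: dict, tag: str) -> bool:
    tags = meta.get('tags', '')
    if isinstance(tags, str):
        # split a comma-separated tag string into individual tag names
        tags = [t.strip() for t in tags.split(',')]
    return tag in tags

# hypothetical item metadata, mirroring the sample/tag names shown above
example_meta = {'filename': 'clu0-sample0.png', 'tags': '1girl, solo, smile, tattoo'}
print(has_tag(example_meta, 'solo'))
```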
CyberHarem/rosanna_nikke
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2024-01-24T10:29:04+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-24T11:41:58+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of rosanna/イベント/罗珊娜/로산나 (Nikke: Goddess of Victory) =========================================================== This is the dataset of rosanna/イベント/罗珊娜/로산나 (Nikke: Goddess of Victory), containing 24 images and their tags. The core tags of this character are 'red\_eyes, black\_hair, long\_hair, multicolored\_hair, breasts, white\_hair, large\_breasts, two-tone\_hair, parted\_bangs, earrings, mole', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide the raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering results, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
0fe26b0fe84a0ff1844e281f6280ff3a28cfda0e
# Dataset of belorta/贝洛塔/벨로타 (Nikke: Goddess of Victory) This is the dataset of belorta/贝洛塔/벨로타 (Nikke: Goddess of Victory), containing 21 images and their tags. The core tags of this character are `twintails, yellow_eyes, bangs, breasts, grey_hair, white_hair, large_breasts`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:----------|:---------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 21 | 25.87 MiB | [Download](https://huggingface.co/datasets/CyberHarem/belorta_nikke/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 21 | 13.93 MiB | [Download](https://huggingface.co/datasets/CyberHarem/belorta_nikke/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 50 | 32.27 MiB | [Download](https://huggingface.co/datasets/CyberHarem/belorta_nikke/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 21 | 21.71 MiB | [Download](https://huggingface.co/datasets/CyberHarem/belorta_nikke/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 50 | 46.97 MiB | [Download](https://huggingface.co/datasets/CyberHarem/belorta_nikke/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. 
| ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/belorta_nikke', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 8 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, bare_shoulders, blush, solo, black_gloves, black_thighhighs, fingerless_gloves, looking_at_viewer, pleated_skirt, smile, black_skirt, detached_sleeves, long_sleeves, sleeveless_shirt, belt, black_sleeves, closed_mouth, hair_between_eyes, red_shirt, brown_eyes, miniskirt, suspenders, tattoo | ### Table Version | # | Samples | Img-1 | Img-2 | 
Img-3 | Img-4 | Img-5 | 1girl | bare_shoulders | blush | solo | black_gloves | black_thighhighs | fingerless_gloves | looking_at_viewer | pleated_skirt | smile | black_skirt | detached_sleeves | long_sleeves | sleeveless_shirt | belt | black_sleeves | closed_mouth | hair_between_eyes | red_shirt | brown_eyes | miniskirt | suspenders | tattoo | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------------|:--------|:-------|:---------------|:-------------------|:--------------------|:--------------------|:----------------|:--------|:--------------|:-------------------|:---------------|:-------------------|:-------|:----------------|:---------------|:--------------------|:------------|:-------------|:------------|:-------------|:---------| | 0 | 8 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
CyberHarem/belorta_nikke
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2024-01-24T10:40:52+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-24T10:45:58+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of belorta/贝洛塔/벨로타 (Nikke: Goddess of Victory) ====================================================== This is the dataset of belorta/贝洛塔/벨로타 (Nikke: Goddess of Victory), containing 21 images and their tags. The core tags of this character are 'twintails, yellow\_eyes, bangs, breasts, grey\_hair, white\_hair, large\_breasts', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the DeepGHS Team (huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide the raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code. List of Clusters ---------------- List of tag clustering results; maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
7b0fe7a1b42a62961cded6f41af1a8125d0df0a0
# Dataset Card for Evaluation run of stabilityai/stablelm-2-1_6b <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [stabilityai/stablelm-2-1_6b](https://huggingface.co/stabilityai/stablelm-2-1_6b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_stabilityai__stablelm-2-1_6b", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2024-01-24T10:43:24.406547](https://huggingface.co/datasets/open-llm-leaderboard/details_stabilityai__stablelm-2-1_6b/blob/main/results_2024-01-24T10-43-24.406547.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.3923741043833077, "acc_stderr": 0.03405640954935936, "acc_norm": 0.3955514306541472, "acc_norm_stderr": 0.03480165961817428, "mc1": 0.22031823745410037, "mc1_stderr": 0.014509045171487283, "mc2": 0.36783858238841727, "mc2_stderr": 0.013915102083485486 }, "harness|arc:challenge|25": { "acc": 0.3967576791808874, "acc_stderr": 0.014296513020180632, "acc_norm": 0.4334470989761092, "acc_norm_stderr": 0.014481376224558896 }, "harness|hellaswag|10": { "acc": 0.5185222067317268, "acc_stderr": 0.004986356526063975, "acc_norm": 0.7045409281019717, "acc_norm_stderr": 0.004553164013379557 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.37, "acc_stderr": 0.04852365870939099, "acc_norm": 0.37, "acc_norm_stderr": 0.04852365870939099 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.4444444444444444, "acc_stderr": 0.04292596718256981, "acc_norm": 0.4444444444444444, "acc_norm_stderr": 0.04292596718256981 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.3157894736842105, "acc_stderr": 0.0378272898086547, "acc_norm": 0.3157894736842105, "acc_norm_stderr": 0.0378272898086547 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.41, "acc_stderr": 0.049431107042371025, "acc_norm": 0.41, "acc_norm_stderr": 0.049431107042371025 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.3169811320754717, "acc_stderr": 0.028637235639800935, "acc_norm": 0.3169811320754717, "acc_norm_stderr": 0.028637235639800935 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.4305555555555556, "acc_stderr": 0.04140685639111503, "acc_norm": 0.4305555555555556, "acc_norm_stderr": 0.04140685639111503 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.32, "acc_stderr": 0.04688261722621504, "acc_norm": 0.32, "acc_norm_stderr": 0.04688261722621504 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.4, "acc_stderr": 0.049236596391733084, "acc_norm": 0.4, "acc_norm_stderr": 
0.049236596391733084 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.3, "acc_stderr": 0.046056618647183814, "acc_norm": 0.3, "acc_norm_stderr": 0.046056618647183814 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.31213872832369943, "acc_stderr": 0.035331333893236574, "acc_norm": 0.31213872832369943, "acc_norm_stderr": 0.035331333893236574 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.1568627450980392, "acc_stderr": 0.03618664819936248, "acc_norm": 0.1568627450980392, "acc_norm_stderr": 0.03618664819936248 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.47, "acc_stderr": 0.050161355804659205, "acc_norm": 0.47, "acc_norm_stderr": 0.050161355804659205 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.3276595744680851, "acc_stderr": 0.030683020843231004, "acc_norm": 0.3276595744680851, "acc_norm_stderr": 0.030683020843231004 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.2631578947368421, "acc_stderr": 0.04142439719489362, "acc_norm": 0.2631578947368421, "acc_norm_stderr": 0.04142439719489362 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.503448275862069, "acc_stderr": 0.04166567577101579, "acc_norm": 0.503448275862069, "acc_norm_stderr": 0.04166567577101579 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.2698412698412698, "acc_stderr": 0.022860838309232072, "acc_norm": 0.2698412698412698, "acc_norm_stderr": 0.022860838309232072 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.24603174603174602, "acc_stderr": 0.038522733649243156, "acc_norm": 0.24603174603174602, "acc_norm_stderr": 0.038522733649243156 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.28, "acc_stderr": 0.045126085985421276, "acc_norm": 0.28, "acc_norm_stderr": 0.045126085985421276 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.3870967741935484, "acc_stderr": 0.027709359675032495, "acc_norm": 0.3870967741935484, "acc_norm_stderr": 0.027709359675032495 }, 
"harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.24630541871921183, "acc_stderr": 0.030315099285617732, "acc_norm": 0.24630541871921183, "acc_norm_stderr": 0.030315099285617732 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.41, "acc_stderr": 0.049431107042371025, "acc_norm": 0.41, "acc_norm_stderr": 0.049431107042371025 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.503030303030303, "acc_stderr": 0.03904272341431856, "acc_norm": 0.503030303030303, "acc_norm_stderr": 0.03904272341431856 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.5050505050505051, "acc_stderr": 0.035621707606254015, "acc_norm": 0.5050505050505051, "acc_norm_stderr": 0.035621707606254015 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.46113989637305697, "acc_stderr": 0.03597524411734579, "acc_norm": 0.46113989637305697, "acc_norm_stderr": 0.03597524411734579 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.32564102564102565, "acc_stderr": 0.02375966576741229, "acc_norm": 0.32564102564102565, "acc_norm_stderr": 0.02375966576741229 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.26666666666666666, "acc_stderr": 0.02696242432507384, "acc_norm": 0.26666666666666666, "acc_norm_stderr": 0.02696242432507384 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.3319327731092437, "acc_stderr": 0.030588697013783663, "acc_norm": 0.3319327731092437, "acc_norm_stderr": 0.030588697013783663 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.23178807947019867, "acc_stderr": 0.03445406271987054, "acc_norm": 0.23178807947019867, "acc_norm_stderr": 0.03445406271987054 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.5064220183486239, "acc_stderr": 0.021435554820013077, "acc_norm": 0.5064220183486239, "acc_norm_stderr": 0.021435554820013077 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.24074074074074073, "acc_stderr": 
0.029157522184605603, "acc_norm": 0.24074074074074073, "acc_norm_stderr": 0.029157522184605603 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.5, "acc_stderr": 0.03509312031717982, "acc_norm": 0.5, "acc_norm_stderr": 0.03509312031717982 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.4936708860759494, "acc_stderr": 0.03254462010767859, "acc_norm": 0.4936708860759494, "acc_norm_stderr": 0.03254462010767859 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.5291479820627802, "acc_stderr": 0.03350073248773404, "acc_norm": 0.5291479820627802, "acc_norm_stderr": 0.03350073248773404 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.4961832061068702, "acc_stderr": 0.043851623256015534, "acc_norm": 0.4961832061068702, "acc_norm_stderr": 0.043851623256015534 }, "harness|hendrycksTest-international_law|5": { "acc": 0.4380165289256198, "acc_stderr": 0.04529146804435791, "acc_norm": 0.4380165289256198, "acc_norm_stderr": 0.04529146804435791 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.4444444444444444, "acc_stderr": 0.04803752235190193, "acc_norm": 0.4444444444444444, "acc_norm_stderr": 0.04803752235190193 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.4110429447852761, "acc_stderr": 0.038656978537853624, "acc_norm": 0.4110429447852761, "acc_norm_stderr": 0.038656978537853624 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.32142857142857145, "acc_stderr": 0.0443280405529152, "acc_norm": 0.32142857142857145, "acc_norm_stderr": 0.0443280405529152 }, "harness|hendrycksTest-management|5": { "acc": 0.42718446601941745, "acc_stderr": 0.04897957737781168, "acc_norm": 0.42718446601941745, "acc_norm_stderr": 0.04897957737781168 }, "harness|hendrycksTest-marketing|5": { "acc": 0.5811965811965812, "acc_stderr": 0.03232128912157792, "acc_norm": 0.5811965811965812, "acc_norm_stderr": 0.03232128912157792 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.42, "acc_stderr": 0.04960449637488584, "acc_norm": 
0.42, "acc_norm_stderr": 0.04960449637488584 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.545338441890166, "acc_stderr": 0.0178063045850526, "acc_norm": 0.545338441890166, "acc_norm_stderr": 0.0178063045850526 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.45375722543352603, "acc_stderr": 0.026803720583206177, "acc_norm": 0.45375722543352603, "acc_norm_stderr": 0.026803720583206177 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.23798882681564246, "acc_stderr": 0.014242630070574915, "acc_norm": 0.23798882681564246, "acc_norm_stderr": 0.014242630070574915 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.38235294117647056, "acc_stderr": 0.027826109307283686, "acc_norm": 0.38235294117647056, "acc_norm_stderr": 0.027826109307283686 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.4180064308681672, "acc_stderr": 0.028013651891995072, "acc_norm": 0.4180064308681672, "acc_norm_stderr": 0.028013651891995072 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.4567901234567901, "acc_stderr": 0.027716661650194038, "acc_norm": 0.4567901234567901, "acc_norm_stderr": 0.027716661650194038 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.29432624113475175, "acc_stderr": 0.027187127011503796, "acc_norm": 0.29432624113475175, "acc_norm_stderr": 0.027187127011503796 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.2861799217731421, "acc_stderr": 0.011543642878150757, "acc_norm": 0.2861799217731421, "acc_norm_stderr": 0.011543642878150757 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.2647058823529412, "acc_stderr": 0.026799562024887674, "acc_norm": 0.2647058823529412, "acc_norm_stderr": 0.026799562024887674 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.38235294117647056, "acc_stderr": 0.019659922493623336, "acc_norm": 0.38235294117647056, "acc_norm_stderr": 0.019659922493623336 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.45454545454545453, "acc_stderr": 0.04769300568972744, 
"acc_norm": 0.45454545454545453, "acc_norm_stderr": 0.04769300568972744 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.32653061224489793, "acc_stderr": 0.030021056238440313, "acc_norm": 0.32653061224489793, "acc_norm_stderr": 0.030021056238440313 }, "harness|hendrycksTest-sociology|5": { "acc": 0.472636815920398, "acc_stderr": 0.03530235517334682, "acc_norm": 0.472636815920398, "acc_norm_stderr": 0.03530235517334682 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.65, "acc_stderr": 0.047937248544110196, "acc_norm": 0.65, "acc_norm_stderr": 0.047937248544110196 }, "harness|hendrycksTest-virology|5": { "acc": 0.39759036144578314, "acc_stderr": 0.038099730845402184, "acc_norm": 0.39759036144578314, "acc_norm_stderr": 0.038099730845402184 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.5614035087719298, "acc_stderr": 0.0380579750559046, "acc_norm": 0.5614035087719298, "acc_norm_stderr": 0.0380579750559046 }, "harness|truthfulqa:mc|0": { "mc1": 0.22031823745410037, "mc1_stderr": 0.014509045171487283, "mc2": 0.36783858238841727, "mc2_stderr": 0.013915102083485486 }, "harness|winogrande|5": { "acc": 0.6456195737963694, "acc_stderr": 0.013443314368356092 }, "harness|gsm8k|5": { "acc": 0.17437452615617893, "acc_stderr": 0.010451421361976233 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. 
--> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. 
--> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
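The per-task metrics in the results payload above all live in one flat JSON object keyed by `harness|<task>|<n_shot>` strings. As a minimal sketch (using a hypothetical trimmed-down copy of that payload rather than the full results file), the task names, few-shot counts, and accuracies can be pulled apart like this:

```python
# hypothetical trimmed-down stand-in for the results payload shown above
results = {
    "harness|arc:challenge|25": {"acc": 0.3967576791808874, "acc_norm": 0.4334470989761092},
    "harness|hellaswag|10": {"acc": 0.5185222067317268, "acc_norm": 0.7045409281019717},
    "harness|winogrande|5": {"acc": 0.6456195737963694},
    "harness|gsm8k|5": {"acc": 0.17437452615617893},
}

# split each "harness|<task>|<n_shot>" key into its parts
parsed = {}
for key, metrics in results.items():
    _, task, n_shot = key.split("|")
    parsed[task] = {"n_shot": int(n_shot), **metrics}

print(parsed["winogrande"])
```

The aggregate entries (`"all"`, `"harness|truthfulqa:mc|0"`'s `mc1`/`mc2` fields) follow slightly different shapes, so a real consumer would need to special-case them.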
open-llm-leaderboard/details_stabilityai__stablelm-2-1_6b
[ "region:us" ]
2024-01-24T10:45:34+00:00
{"pretty_name": "Evaluation run of stabilityai/stablelm-2-1_6b", "dataset_summary": "Dataset automatically created during the evaluation run of model [stabilityai/stablelm-2-1_6b](https://huggingface.co/stabilityai/stablelm-2-1_6b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_stabilityai__stablelm-2-1_6b\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-24T10:43:24.406547](https://huggingface.co/datasets/open-llm-leaderboard/details_stabilityai__stablelm-2-1_6b/blob/main/results_2024-01-24T10-43-24.406547.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.3923741043833077,\n \"acc_stderr\": 0.03405640954935936,\n \"acc_norm\": 0.3955514306541472,\n \"acc_norm_stderr\": 0.03480165961817428,\n \"mc1\": 0.22031823745410037,\n \"mc1_stderr\": 0.014509045171487283,\n \"mc2\": 0.36783858238841727,\n \"mc2_stderr\": 0.013915102083485486\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.3967576791808874,\n \"acc_stderr\": 0.014296513020180632,\n \"acc_norm\": 0.4334470989761092,\n \"acc_norm_stderr\": 0.014481376224558896\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5185222067317268,\n \"acc_stderr\": 0.004986356526063975,\n \"acc_norm\": 0.7045409281019717,\n \"acc_norm_stderr\": 0.004553164013379557\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.37,\n \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.4444444444444444,\n \"acc_stderr\": 0.04292596718256981,\n \"acc_norm\": 0.4444444444444444,\n \"acc_norm_stderr\": 0.04292596718256981\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.3157894736842105,\n \"acc_stderr\": 0.0378272898086547,\n \"acc_norm\": 0.3157894736842105,\n \"acc_norm_stderr\": 0.0378272898086547\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.41,\n \"acc_stderr\": 0.049431107042371025,\n \"acc_norm\": 0.41,\n \"acc_norm_stderr\": 0.049431107042371025\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.3169811320754717,\n \"acc_stderr\": 0.028637235639800935,\n \"acc_norm\": 0.3169811320754717,\n \"acc_norm_stderr\": 0.028637235639800935\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.4305555555555556,\n \"acc_stderr\": 0.04140685639111503,\n \"acc_norm\": 0.4305555555555556,\n \"acc_norm_stderr\": 0.04140685639111503\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.32,\n 
\"acc_stderr\": 0.04688261722621504,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.04688261722621504\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.4,\n \"acc_stderr\": 0.049236596391733084,\n \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.049236596391733084\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.31213872832369943,\n \"acc_stderr\": 0.035331333893236574,\n \"acc_norm\": 0.31213872832369943,\n \"acc_norm_stderr\": 0.035331333893236574\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.1568627450980392,\n \"acc_stderr\": 0.03618664819936248,\n \"acc_norm\": 0.1568627450980392,\n \"acc_norm_stderr\": 0.03618664819936248\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.47,\n \"acc_stderr\": 0.050161355804659205,\n \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.050161355804659205\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.3276595744680851,\n \"acc_stderr\": 0.030683020843231004,\n \"acc_norm\": 0.3276595744680851,\n \"acc_norm_stderr\": 0.030683020843231004\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.2631578947368421,\n \"acc_stderr\": 0.04142439719489362,\n \"acc_norm\": 0.2631578947368421,\n \"acc_norm_stderr\": 0.04142439719489362\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.503448275862069,\n \"acc_stderr\": 0.04166567577101579,\n \"acc_norm\": 0.503448275862069,\n \"acc_norm_stderr\": 0.04166567577101579\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.2698412698412698,\n \"acc_stderr\": 0.022860838309232072,\n \"acc_norm\": 0.2698412698412698,\n \"acc_norm_stderr\": 0.022860838309232072\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.24603174603174602,\n \"acc_stderr\": 
0.038522733649243156,\n \"acc_norm\": 0.24603174603174602,\n \"acc_norm_stderr\": 0.038522733649243156\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.28,\n \"acc_stderr\": 0.045126085985421276,\n \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.045126085985421276\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.3870967741935484,\n \"acc_stderr\": 0.027709359675032495,\n \"acc_norm\": 0.3870967741935484,\n \"acc_norm_stderr\": 0.027709359675032495\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.24630541871921183,\n \"acc_stderr\": 0.030315099285617732,\n \"acc_norm\": 0.24630541871921183,\n \"acc_norm_stderr\": 0.030315099285617732\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.41,\n \"acc_stderr\": 0.049431107042371025,\n \"acc_norm\": 0.41,\n \"acc_norm_stderr\": 0.049431107042371025\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.503030303030303,\n \"acc_stderr\": 0.03904272341431856,\n \"acc_norm\": 0.503030303030303,\n \"acc_norm_stderr\": 0.03904272341431856\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.5050505050505051,\n \"acc_stderr\": 0.035621707606254015,\n \"acc_norm\": 0.5050505050505051,\n \"acc_norm_stderr\": 0.035621707606254015\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.46113989637305697,\n \"acc_stderr\": 0.03597524411734579,\n \"acc_norm\": 0.46113989637305697,\n \"acc_norm_stderr\": 0.03597524411734579\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.32564102564102565,\n \"acc_stderr\": 0.02375966576741229,\n \"acc_norm\": 0.32564102564102565,\n \"acc_norm_stderr\": 0.02375966576741229\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.26666666666666666,\n \"acc_stderr\": 0.02696242432507384,\n \"acc_norm\": 0.26666666666666666,\n \"acc_norm_stderr\": 0.02696242432507384\n },\n 
\"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.3319327731092437,\n \"acc_stderr\": 0.030588697013783663,\n \"acc_norm\": 0.3319327731092437,\n \"acc_norm_stderr\": 0.030588697013783663\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.23178807947019867,\n \"acc_stderr\": 0.03445406271987054,\n \"acc_norm\": 0.23178807947019867,\n \"acc_norm_stderr\": 0.03445406271987054\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.5064220183486239,\n \"acc_stderr\": 0.021435554820013077,\n \"acc_norm\": 0.5064220183486239,\n \"acc_norm_stderr\": 0.021435554820013077\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.24074074074074073,\n \"acc_stderr\": 0.029157522184605603,\n \"acc_norm\": 0.24074074074074073,\n \"acc_norm_stderr\": 0.029157522184605603\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.03509312031717982,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.03509312031717982\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.4936708860759494,\n \"acc_stderr\": 0.03254462010767859,\n \"acc_norm\": 0.4936708860759494,\n \"acc_norm_stderr\": 0.03254462010767859\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.5291479820627802,\n \"acc_stderr\": 0.03350073248773404,\n \"acc_norm\": 0.5291479820627802,\n \"acc_norm_stderr\": 0.03350073248773404\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.4961832061068702,\n \"acc_stderr\": 0.043851623256015534,\n \"acc_norm\": 0.4961832061068702,\n \"acc_norm_stderr\": 0.043851623256015534\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.4380165289256198,\n \"acc_stderr\": 0.04529146804435791,\n \"acc_norm\": 0.4380165289256198,\n \"acc_norm_stderr\": 0.04529146804435791\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.4444444444444444,\n \"acc_stderr\": 0.04803752235190193,\n \"acc_norm\": 
0.4444444444444444,\n \"acc_norm_stderr\": 0.04803752235190193\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.4110429447852761,\n \"acc_stderr\": 0.038656978537853624,\n \"acc_norm\": 0.4110429447852761,\n \"acc_norm_stderr\": 0.038656978537853624\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.32142857142857145,\n \"acc_stderr\": 0.0443280405529152,\n \"acc_norm\": 0.32142857142857145,\n \"acc_norm_stderr\": 0.0443280405529152\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.42718446601941745,\n \"acc_stderr\": 0.04897957737781168,\n \"acc_norm\": 0.42718446601941745,\n \"acc_norm_stderr\": 0.04897957737781168\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.5811965811965812,\n \"acc_stderr\": 0.03232128912157792,\n \"acc_norm\": 0.5811965811965812,\n \"acc_norm_stderr\": 0.03232128912157792\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.42,\n \"acc_stderr\": 0.04960449637488584,\n \"acc_norm\": 0.42,\n \"acc_norm_stderr\": 0.04960449637488584\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.545338441890166,\n \"acc_stderr\": 0.0178063045850526,\n \"acc_norm\": 0.545338441890166,\n \"acc_norm_stderr\": 0.0178063045850526\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.45375722543352603,\n \"acc_stderr\": 0.026803720583206177,\n \"acc_norm\": 0.45375722543352603,\n \"acc_norm_stderr\": 0.026803720583206177\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.23798882681564246,\n \"acc_stderr\": 0.014242630070574915,\n \"acc_norm\": 0.23798882681564246,\n \"acc_norm_stderr\": 0.014242630070574915\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.38235294117647056,\n \"acc_stderr\": 0.027826109307283686,\n \"acc_norm\": 0.38235294117647056,\n \"acc_norm_stderr\": 0.027826109307283686\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.4180064308681672,\n \"acc_stderr\": 0.028013651891995072,\n \"acc_norm\": 
0.4180064308681672,\n \"acc_norm_stderr\": 0.028013651891995072\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.4567901234567901,\n \"acc_stderr\": 0.027716661650194038,\n \"acc_norm\": 0.4567901234567901,\n \"acc_norm_stderr\": 0.027716661650194038\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.29432624113475175,\n \"acc_stderr\": 0.027187127011503796,\n \"acc_norm\": 0.29432624113475175,\n \"acc_norm_stderr\": 0.027187127011503796\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.2861799217731421,\n \"acc_stderr\": 0.011543642878150757,\n \"acc_norm\": 0.2861799217731421,\n \"acc_norm_stderr\": 0.011543642878150757\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.2647058823529412,\n \"acc_stderr\": 0.026799562024887674,\n \"acc_norm\": 0.2647058823529412,\n \"acc_norm_stderr\": 0.026799562024887674\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.38235294117647056,\n \"acc_stderr\": 0.019659922493623336,\n \"acc_norm\": 0.38235294117647056,\n \"acc_norm_stderr\": 0.019659922493623336\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.45454545454545453,\n \"acc_stderr\": 0.04769300568972744,\n \"acc_norm\": 0.45454545454545453,\n \"acc_norm_stderr\": 0.04769300568972744\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.32653061224489793,\n \"acc_stderr\": 0.030021056238440313,\n \"acc_norm\": 0.32653061224489793,\n \"acc_norm_stderr\": 0.030021056238440313\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.472636815920398,\n \"acc_stderr\": 0.03530235517334682,\n \"acc_norm\": 0.472636815920398,\n \"acc_norm_stderr\": 0.03530235517334682\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.65,\n \"acc_stderr\": 0.047937248544110196,\n \"acc_norm\": 0.65,\n \"acc_norm_stderr\": 0.047937248544110196\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.39759036144578314,\n \"acc_stderr\": 
0.038099730845402184,\n \"acc_norm\": 0.39759036144578314,\n \"acc_norm_stderr\": 0.038099730845402184\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.5614035087719298,\n \"acc_stderr\": 0.0380579750559046,\n \"acc_norm\": 0.5614035087719298,\n \"acc_norm_stderr\": 0.0380579750559046\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.22031823745410037,\n \"mc1_stderr\": 0.014509045171487283,\n \"mc2\": 0.36783858238841727,\n \"mc2_stderr\": 0.013915102083485486\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.6456195737963694,\n \"acc_stderr\": 0.013443314368356092\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.17437452615617893,\n \"acc_stderr\": 0.010451421361976233\n }\n}\n```", "repo_url": "https://huggingface.co/jsfs11/TurdusTrixBeagle-DARETIES-7B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|arc:challenge|25_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|gsm8k|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|hellaswag|10_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T10-43-24.406547.parquet", 
"**/details_harness|hendrycksTest-anatomy|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T10-43-24.406547.parquet", 
"**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T10-43-24.406547.parquet", 
"**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T10-43-24.406547.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T10-43-24.406547.parquet", 
"**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T10-43-24.406547.parquet", 
"**/details_harness|hendrycksTest-public_relations|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-24T10-43-24.406547.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": 
["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": 
["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", 
"path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": 
"2024_01_24T10_43_24.406547", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": 
"2024_01_24T10_43_24.406547", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-24T10-43-24.406547.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["**/details_harness|winogrande|5_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-24T10-43-24.406547.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_01_24T10_43_24.406547", "path": ["results_2024-01-24T10-43-24.406547.parquet"]}, {"split": "latest", "path": ["results_2024-01-24T10-43-24.406547.parquet"]}]}]}
2024-01-24T10:46:00+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of stabilityai/stablelm-2-1_6b Dataset automatically created during the evaluation run of model stabilityai/stablelm-2-1_6b on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2024-01-24T10:43:24.406547 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Evaluation run of stabilityai/stablelm-2-1_6b\n\n\n\nDataset automatically created during the evaluation run of model stabilityai/stablelm-2-1_6b on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-01-24T10:43:24.406547(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of stabilityai/stablelm-2-1_6b\n\n\n\nDataset automatically created during the evaluation run of model stabilityai/stablelm-2-1_6b on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-01-24T10:43:24.406547(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
355566b0df3b430f7058fa5f8cab49221aabad1e
# Dataset Card for Evaluation run of OpenBuddy/openbuddy-mixtral-7bx8-v17.1-32k <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [OpenBuddy/openbuddy-mixtral-7bx8-v17.1-32k](https://huggingface.co/OpenBuddy/openbuddy-mixtral-7bx8-v17.1-32k) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_OpenBuddy__openbuddy-mixtral-7bx8-v17.1-32k", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2024-01-24T10:48:34.323714](https://huggingface.co/datasets/open-llm-leaderboard/details_OpenBuddy__openbuddy-mixtral-7bx8-v17.1-32k/blob/main/results_2024-01-24T10-48-34.323714.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.6959391076316241, "acc_stderr": 0.03060969376377179, "acc_norm": 0.7004357595579067, "acc_norm_stderr": 0.031196750430346164, "mc1": 0.2717258261933905, "mc1_stderr": 0.01557284045287583, "mc2": 0.42136704852768436, "mc2_stderr": 0.014220364088765812 }, "harness|arc:challenge|25": { "acc": 0.6168941979522184, "acc_stderr": 0.014206472661672876, "acc_norm": 0.6552901023890785, "acc_norm_stderr": 0.01388881628678211 }, "harness|hellaswag|10": { "acc": 0.5769766978689504, "acc_stderr": 0.004930293787545623, "acc_norm": 0.7595100577574189, "acc_norm_stderr": 0.0042650756644566235 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.36, "acc_stderr": 0.04824181513244218, "acc_norm": 0.36, "acc_norm_stderr": 0.04824181513244218 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.6518518518518519, "acc_stderr": 0.041153246103369526, "acc_norm": 0.6518518518518519, "acc_norm_stderr": 0.041153246103369526 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.7828947368421053, "acc_stderr": 0.03355045304882924, "acc_norm": 0.7828947368421053, "acc_norm_stderr": 0.03355045304882924 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.71, "acc_stderr": 0.045604802157206845, "acc_norm": 0.71, "acc_norm_stderr": 0.045604802157206845 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.7886792452830189, "acc_stderr": 0.02512576648482785, "acc_norm": 0.7886792452830189, "acc_norm_stderr": 0.02512576648482785 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.8263888888888888, "acc_stderr": 0.03167473383795718, "acc_norm": 0.8263888888888888, "acc_norm_stderr": 0.03167473383795718 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.54, "acc_stderr": 0.05009082659620332, "acc_norm": 0.54, "acc_norm_stderr": 0.05009082659620332 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.64, "acc_stderr": 0.04824181513244218, "acc_norm": 0.64, 
"acc_norm_stderr": 0.04824181513244218 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.43, "acc_stderr": 0.04975698519562428, "acc_norm": 0.43, "acc_norm_stderr": 0.04975698519562428 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6647398843930635, "acc_stderr": 0.03599586301247077, "acc_norm": 0.6647398843930635, "acc_norm_stderr": 0.03599586301247077 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.45098039215686275, "acc_stderr": 0.04951218252396264, "acc_norm": 0.45098039215686275, "acc_norm_stderr": 0.04951218252396264 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.78, "acc_stderr": 0.04163331998932263, "acc_norm": 0.78, "acc_norm_stderr": 0.04163331998932263 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.676595744680851, "acc_stderr": 0.030579442773610337, "acc_norm": 0.676595744680851, "acc_norm_stderr": 0.030579442773610337 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.5614035087719298, "acc_stderr": 0.04668000738510455, "acc_norm": 0.5614035087719298, "acc_norm_stderr": 0.04668000738510455 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.6344827586206897, "acc_stderr": 0.04013124195424386, "acc_norm": 0.6344827586206897, "acc_norm_stderr": 0.04013124195424386 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.4576719576719577, "acc_stderr": 0.02565886886205833, "acc_norm": 0.4576719576719577, "acc_norm_stderr": 0.02565886886205833 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.5, "acc_stderr": 0.04472135954999579, "acc_norm": 0.5, "acc_norm_stderr": 0.04472135954999579 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.37, "acc_stderr": 0.048523658709391, "acc_norm": 0.37, "acc_norm_stderr": 0.048523658709391 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.8548387096774194, "acc_stderr": 0.020039563628053286, "acc_norm": 0.8548387096774194, "acc_norm_stderr": 0.020039563628053286 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 
0.5665024630541872, "acc_stderr": 0.03486731727419872, "acc_norm": 0.5665024630541872, "acc_norm_stderr": 0.03486731727419872 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.74, "acc_stderr": 0.0440844002276808, "acc_norm": 0.74, "acc_norm_stderr": 0.0440844002276808 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.8242424242424242, "acc_stderr": 0.02972094300622445, "acc_norm": 0.8242424242424242, "acc_norm_stderr": 0.02972094300622445 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.8535353535353535, "acc_stderr": 0.025190921114603915, "acc_norm": 0.8535353535353535, "acc_norm_stderr": 0.025190921114603915 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.9222797927461139, "acc_stderr": 0.01932180555722315, "acc_norm": 0.9222797927461139, "acc_norm_stderr": 0.01932180555722315 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.6871794871794872, "acc_stderr": 0.023507579020645358, "acc_norm": 0.6871794871794872, "acc_norm_stderr": 0.023507579020645358 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.3296296296296296, "acc_stderr": 0.028661201116524593, "acc_norm": 0.3296296296296296, "acc_norm_stderr": 0.028661201116524593 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.8067226890756303, "acc_stderr": 0.02564947026588919, "acc_norm": 0.8067226890756303, "acc_norm_stderr": 0.02564947026588919 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.4370860927152318, "acc_stderr": 0.04050035722230636, "acc_norm": 0.4370860927152318, "acc_norm_stderr": 0.04050035722230636 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8715596330275229, "acc_stderr": 0.014344977542914316, "acc_norm": 0.8715596330275229, "acc_norm_stderr": 0.014344977542914316 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.6064814814814815, "acc_stderr": 0.03331747876370312, "acc_norm": 0.6064814814814815, "acc_norm_stderr": 
0.03331747876370312 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.8676470588235294, "acc_stderr": 0.023784297520918853, "acc_norm": 0.8676470588235294, "acc_norm_stderr": 0.023784297520918853 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.8818565400843882, "acc_stderr": 0.021011052659878463, "acc_norm": 0.8818565400843882, "acc_norm_stderr": 0.021011052659878463 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.7892376681614349, "acc_stderr": 0.02737309550054019, "acc_norm": 0.7892376681614349, "acc_norm_stderr": 0.02737309550054019 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.7938931297709924, "acc_stderr": 0.03547771004159463, "acc_norm": 0.7938931297709924, "acc_norm_stderr": 0.03547771004159463 }, "harness|hendrycksTest-international_law|5": { "acc": 0.859504132231405, "acc_stderr": 0.03172233426002158, "acc_norm": 0.859504132231405, "acc_norm_stderr": 0.03172233426002158 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.8240740740740741, "acc_stderr": 0.03680918141673881, "acc_norm": 0.8240740740740741, "acc_norm_stderr": 0.03680918141673881 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7791411042944786, "acc_stderr": 0.03259177392742179, "acc_norm": 0.7791411042944786, "acc_norm_stderr": 0.03259177392742179 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.5714285714285714, "acc_stderr": 0.04697113923010213, "acc_norm": 0.5714285714285714, "acc_norm_stderr": 0.04697113923010213 }, "harness|hendrycksTest-management|5": { "acc": 0.8349514563106796, "acc_stderr": 0.036756688322331886, "acc_norm": 0.8349514563106796, "acc_norm_stderr": 0.036756688322331886 }, "harness|hendrycksTest-marketing|5": { "acc": 0.9188034188034188, "acc_stderr": 0.01789378490401853, "acc_norm": 0.9188034188034188, "acc_norm_stderr": 0.01789378490401853 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.77, "acc_stderr": 0.042295258468165065, "acc_norm": 0.77, "acc_norm_stderr": 0.042295258468165065 }, 
"harness|hendrycksTest-miscellaneous|5": { "acc": 0.8748403575989783, "acc_stderr": 0.011832954239305738, "acc_norm": 0.8748403575989783, "acc_norm_stderr": 0.011832954239305738 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.7601156069364162, "acc_stderr": 0.02298959254312356, "acc_norm": 0.7601156069364162, "acc_norm_stderr": 0.02298959254312356 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.4670391061452514, "acc_stderr": 0.016686126653013934, "acc_norm": 0.4670391061452514, "acc_norm_stderr": 0.016686126653013934 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.7810457516339869, "acc_stderr": 0.02367908986180772, "acc_norm": 0.7810457516339869, "acc_norm_stderr": 0.02367908986180772 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.797427652733119, "acc_stderr": 0.022827317491059686, "acc_norm": 0.797427652733119, "acc_norm_stderr": 0.022827317491059686 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.808641975308642, "acc_stderr": 0.021887704613396154, "acc_norm": 0.808641975308642, "acc_norm_stderr": 0.021887704613396154 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.48936170212765956, "acc_stderr": 0.029820747191422487, "acc_norm": 0.48936170212765956, "acc_norm_stderr": 0.029820747191422487 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.5130378096479792, "acc_stderr": 0.012765893883835332, "acc_norm": 0.5130378096479792, "acc_norm_stderr": 0.012765893883835332 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.7757352941176471, "acc_stderr": 0.025336848563332376, "acc_norm": 0.7757352941176471, "acc_norm_stderr": 0.025336848563332376 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.7467320261437909, "acc_stderr": 0.01759348689536683, "acc_norm": 0.7467320261437909, "acc_norm_stderr": 0.01759348689536683 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.7, "acc_stderr": 0.04389311454644287, "acc_norm": 0.7, "acc_norm_stderr": 0.04389311454644287 }, 
"harness|hendrycksTest-security_studies|5": { "acc": 0.8040816326530612, "acc_stderr": 0.025409301953225678, "acc_norm": 0.8040816326530612, "acc_norm_stderr": 0.025409301953225678 }, "harness|hendrycksTest-sociology|5": { "acc": 0.8805970149253731, "acc_stderr": 0.02292879327721974, "acc_norm": 0.8805970149253731, "acc_norm_stderr": 0.02292879327721974 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.9, "acc_stderr": 0.030151134457776348, "acc_norm": 0.9, "acc_norm_stderr": 0.030151134457776348 }, "harness|hendrycksTest-virology|5": { "acc": 0.5421686746987951, "acc_stderr": 0.0387862677100236, "acc_norm": 0.5421686746987951, "acc_norm_stderr": 0.0387862677100236 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.8538011695906432, "acc_stderr": 0.02709729011807082, "acc_norm": 0.8538011695906432, "acc_norm_stderr": 0.02709729011807082 }, "harness|truthfulqa:mc|0": { "mc1": 0.2717258261933905, "mc1_stderr": 0.01557284045287583, "mc2": 0.42136704852768436, "mc2_stderr": 0.014220364088765812 }, "harness|winogrande|5": { "acc": 0.7569060773480663, "acc_stderr": 0.012055665630431036 }, "harness|gsm8k|5": { "acc": 0.5905989385898408, "acc_stderr": 0.013544504071244513 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. 
--> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. 
--> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
open-llm-leaderboard/details_OpenBuddy__openbuddy-mixtral-7bx8-v17.1-32k
[ "region:us" ]
2024-01-24T10:50:54+00:00
{"pretty_name": "Evaluation run of OpenBuddy/openbuddy-mixtral-7bx8-v17.1-32k", "dataset_summary": "Dataset automatically created during the evaluation run of model [OpenBuddy/openbuddy-mixtral-7bx8-v17.1-32k](https://huggingface.co/OpenBuddy/openbuddy-mixtral-7bx8-v17.1-32k) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_OpenBuddy__openbuddy-mixtral-7bx8-v17.1-32k\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-24T10:48:34.323714](https://huggingface.co/datasets/open-llm-leaderboard/details_OpenBuddy__openbuddy-mixtral-7bx8-v17.1-32k/blob/main/results_2024-01-24T10-48-34.323714.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6959391076316241,\n \"acc_stderr\": 0.03060969376377179,\n \"acc_norm\": 0.7004357595579067,\n \"acc_norm_stderr\": 0.031196750430346164,\n \"mc1\": 0.2717258261933905,\n \"mc1_stderr\": 0.01557284045287583,\n \"mc2\": 0.42136704852768436,\n \"mc2_stderr\": 0.014220364088765812\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6168941979522184,\n \"acc_stderr\": 0.014206472661672876,\n \"acc_norm\": 0.6552901023890785,\n \"acc_norm_stderr\": 0.01388881628678211\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5769766978689504,\n \"acc_stderr\": 0.004930293787545623,\n \"acc_norm\": 0.7595100577574189,\n \"acc_norm_stderr\": 0.0042650756644566235\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.36,\n \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6518518518518519,\n \"acc_stderr\": 0.041153246103369526,\n \"acc_norm\": 0.6518518518518519,\n \"acc_norm_stderr\": 0.041153246103369526\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.7828947368421053,\n \"acc_stderr\": 0.03355045304882924,\n \"acc_norm\": 0.7828947368421053,\n \"acc_norm_stderr\": 0.03355045304882924\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.7886792452830189,\n \"acc_stderr\": 0.02512576648482785,\n \"acc_norm\": 0.7886792452830189,\n \"acc_norm_stderr\": 0.02512576648482785\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.8263888888888888,\n \"acc_stderr\": 0.03167473383795718,\n \"acc_norm\": 0.8263888888888888,\n \"acc_norm_stderr\": 0.03167473383795718\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.54,\n 
\"acc_stderr\": 0.05009082659620332,\n \"acc_norm\": 0.54,\n \"acc_norm_stderr\": 0.05009082659620332\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.64,\n \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.64,\n \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.43,\n \"acc_stderr\": 0.04975698519562428,\n \"acc_norm\": 0.43,\n \"acc_norm_stderr\": 0.04975698519562428\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6647398843930635,\n \"acc_stderr\": 0.03599586301247077,\n \"acc_norm\": 0.6647398843930635,\n \"acc_norm_stderr\": 0.03599586301247077\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.45098039215686275,\n \"acc_stderr\": 0.04951218252396264,\n \"acc_norm\": 0.45098039215686275,\n \"acc_norm_stderr\": 0.04951218252396264\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.78,\n \"acc_stderr\": 0.04163331998932263,\n \"acc_norm\": 0.78,\n \"acc_norm_stderr\": 0.04163331998932263\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.676595744680851,\n \"acc_stderr\": 0.030579442773610337,\n \"acc_norm\": 0.676595744680851,\n \"acc_norm_stderr\": 0.030579442773610337\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5614035087719298,\n \"acc_stderr\": 0.04668000738510455,\n \"acc_norm\": 0.5614035087719298,\n \"acc_norm_stderr\": 0.04668000738510455\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.6344827586206897,\n \"acc_stderr\": 0.04013124195424386,\n \"acc_norm\": 0.6344827586206897,\n \"acc_norm_stderr\": 0.04013124195424386\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.4576719576719577,\n \"acc_stderr\": 0.02565886886205833,\n \"acc_norm\": 0.4576719576719577,\n \"acc_norm_stderr\": 0.02565886886205833\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.04472135954999579,\n \"acc_norm\": 
0.5,\n \"acc_norm_stderr\": 0.04472135954999579\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.37,\n \"acc_stderr\": 0.048523658709391,\n \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.048523658709391\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.8548387096774194,\n \"acc_stderr\": 0.020039563628053286,\n \"acc_norm\": 0.8548387096774194,\n \"acc_norm_stderr\": 0.020039563628053286\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5665024630541872,\n \"acc_stderr\": 0.03486731727419872,\n \"acc_norm\": 0.5665024630541872,\n \"acc_norm_stderr\": 0.03486731727419872\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.74,\n \"acc_stderr\": 0.0440844002276808,\n \"acc_norm\": 0.74,\n \"acc_norm_stderr\": 0.0440844002276808\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.8242424242424242,\n \"acc_stderr\": 0.02972094300622445,\n \"acc_norm\": 0.8242424242424242,\n \"acc_norm_stderr\": 0.02972094300622445\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.8535353535353535,\n \"acc_stderr\": 0.025190921114603915,\n \"acc_norm\": 0.8535353535353535,\n \"acc_norm_stderr\": 0.025190921114603915\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.9222797927461139,\n \"acc_stderr\": 0.01932180555722315,\n \"acc_norm\": 0.9222797927461139,\n \"acc_norm_stderr\": 0.01932180555722315\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6871794871794872,\n \"acc_stderr\": 0.023507579020645358,\n \"acc_norm\": 0.6871794871794872,\n \"acc_norm_stderr\": 0.023507579020645358\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3296296296296296,\n \"acc_stderr\": 0.028661201116524593,\n \"acc_norm\": 0.3296296296296296,\n \"acc_norm_stderr\": 0.028661201116524593\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.8067226890756303,\n 
\"acc_stderr\": 0.02564947026588919,\n \"acc_norm\": 0.8067226890756303,\n \"acc_norm_stderr\": 0.02564947026588919\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.4370860927152318,\n \"acc_stderr\": 0.04050035722230636,\n \"acc_norm\": 0.4370860927152318,\n \"acc_norm_stderr\": 0.04050035722230636\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8715596330275229,\n \"acc_stderr\": 0.014344977542914316,\n \"acc_norm\": 0.8715596330275229,\n \"acc_norm_stderr\": 0.014344977542914316\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.6064814814814815,\n \"acc_stderr\": 0.03331747876370312,\n \"acc_norm\": 0.6064814814814815,\n \"acc_norm_stderr\": 0.03331747876370312\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8676470588235294,\n \"acc_stderr\": 0.023784297520918853,\n \"acc_norm\": 0.8676470588235294,\n \"acc_norm_stderr\": 0.023784297520918853\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.8818565400843882,\n \"acc_stderr\": 0.021011052659878463,\n \"acc_norm\": 0.8818565400843882,\n \"acc_norm_stderr\": 0.021011052659878463\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7892376681614349,\n \"acc_stderr\": 0.02737309550054019,\n \"acc_norm\": 0.7892376681614349,\n \"acc_norm_stderr\": 0.02737309550054019\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7938931297709924,\n \"acc_stderr\": 0.03547771004159463,\n \"acc_norm\": 0.7938931297709924,\n \"acc_norm_stderr\": 0.03547771004159463\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.859504132231405,\n \"acc_stderr\": 0.03172233426002158,\n \"acc_norm\": 0.859504132231405,\n \"acc_norm_stderr\": 0.03172233426002158\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8240740740740741,\n \"acc_stderr\": 0.03680918141673881,\n \"acc_norm\": 0.8240740740740741,\n \"acc_norm_stderr\": 0.03680918141673881\n },\n 
\"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7791411042944786,\n \"acc_stderr\": 0.03259177392742179,\n \"acc_norm\": 0.7791411042944786,\n \"acc_norm_stderr\": 0.03259177392742179\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5714285714285714,\n \"acc_stderr\": 0.04697113923010213,\n \"acc_norm\": 0.5714285714285714,\n \"acc_norm_stderr\": 0.04697113923010213\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8349514563106796,\n \"acc_stderr\": 0.036756688322331886,\n \"acc_norm\": 0.8349514563106796,\n \"acc_norm_stderr\": 0.036756688322331886\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.9188034188034188,\n \"acc_stderr\": 0.01789378490401853,\n \"acc_norm\": 0.9188034188034188,\n \"acc_norm_stderr\": 0.01789378490401853\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.77,\n \"acc_stderr\": 0.042295258468165065,\n \"acc_norm\": 0.77,\n \"acc_norm_stderr\": 0.042295258468165065\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8748403575989783,\n \"acc_stderr\": 0.011832954239305738,\n \"acc_norm\": 0.8748403575989783,\n \"acc_norm_stderr\": 0.011832954239305738\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7601156069364162,\n \"acc_stderr\": 0.02298959254312356,\n \"acc_norm\": 0.7601156069364162,\n \"acc_norm_stderr\": 0.02298959254312356\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.4670391061452514,\n \"acc_stderr\": 0.016686126653013934,\n \"acc_norm\": 0.4670391061452514,\n \"acc_norm_stderr\": 0.016686126653013934\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7810457516339869,\n \"acc_stderr\": 0.02367908986180772,\n \"acc_norm\": 0.7810457516339869,\n \"acc_norm_stderr\": 0.02367908986180772\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.797427652733119,\n \"acc_stderr\": 0.022827317491059686,\n \"acc_norm\": 0.797427652733119,\n \"acc_norm_stderr\": 0.022827317491059686\n },\n 
\"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.808641975308642,\n \"acc_stderr\": 0.021887704613396154,\n \"acc_norm\": 0.808641975308642,\n \"acc_norm_stderr\": 0.021887704613396154\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.48936170212765956,\n \"acc_stderr\": 0.029820747191422487,\n \"acc_norm\": 0.48936170212765956,\n \"acc_norm_stderr\": 0.029820747191422487\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.5130378096479792,\n \"acc_stderr\": 0.012765893883835332,\n \"acc_norm\": 0.5130378096479792,\n \"acc_norm_stderr\": 0.012765893883835332\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.7757352941176471,\n \"acc_stderr\": 0.025336848563332376,\n \"acc_norm\": 0.7757352941176471,\n \"acc_norm_stderr\": 0.025336848563332376\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.7467320261437909,\n \"acc_stderr\": 0.01759348689536683,\n \"acc_norm\": 0.7467320261437909,\n \"acc_norm_stderr\": 0.01759348689536683\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7,\n \"acc_stderr\": 0.04389311454644287,\n \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.04389311454644287\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.8040816326530612,\n \"acc_stderr\": 0.025409301953225678,\n \"acc_norm\": 0.8040816326530612,\n \"acc_norm_stderr\": 0.025409301953225678\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8805970149253731,\n \"acc_stderr\": 0.02292879327721974,\n \"acc_norm\": 0.8805970149253731,\n \"acc_norm_stderr\": 0.02292879327721974\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.9,\n \"acc_stderr\": 0.030151134457776348,\n \"acc_norm\": 0.9,\n \"acc_norm_stderr\": 0.030151134457776348\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5421686746987951,\n \"acc_stderr\": 0.0387862677100236,\n \"acc_norm\": 0.5421686746987951,\n \"acc_norm_stderr\": 0.0387862677100236\n },\n 
\"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8538011695906432,\n \"acc_stderr\": 0.02709729011807082,\n \"acc_norm\": 0.8538011695906432,\n \"acc_norm_stderr\": 0.02709729011807082\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2717258261933905,\n \"mc1_stderr\": 0.01557284045287583,\n \"mc2\": 0.42136704852768436,\n \"mc2_stderr\": 0.014220364088765812\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7569060773480663,\n \"acc_stderr\": 0.012055665630431036\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.5905989385898408,\n \"acc_stderr\": 0.013544504071244513\n }\n}\n```", "repo_url": "https://huggingface.co/OpenBuddy/openbuddy-mixtral-7bx8-v17.1-32k", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|arc:challenge|25_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|gsm8k|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|hellaswag|10_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-24T10-48-34.323714.parquet", 
"**/details_harness|hendrycksTest-astronomy|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T10-48-34.323714.parquet", 
"**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T10-48-34.323714.parquet", 
"**/details_harness|hendrycksTest-nutrition|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T10-48-34.323714.parquet", 
"**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T10-48-34.323714.parquet", 
"**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-24T10-48-34.323714.parquet", 
"**/details_harness|hendrycksTest-security_studies|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-24T10-48-34.323714.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": 
["**/details_harness|hendrycksTest-college_physics|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", 
"data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-24T10-48-34.323714.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", 
"path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["**/details_harness|winogrande|5_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-24T10-48-34.323714.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_01_24T10_48_34.323714", "path": ["results_2024-01-24T10-48-34.323714.parquet"]}, {"split": "latest", "path": ["results_2024-01-24T10-48-34.323714.parquet"]}]}]}
2024-01-24T10:51:15+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of OpenBuddy/openbuddy-mixtral-7bx8-v17.1-32k Dataset automatically created during the evaluation run of model OpenBuddy/openbuddy-mixtral-7bx8-v17.1-32k on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2024-01-24T10:48:34.323714 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
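The card above notes that each run's split is named from the run timestamp; the exact mapping can be read off the `data_files` listings earlier in this dump (split names use underscores, parquet file stems use dashes). A small sketch of that convention, inferred from this file rather than from any documented API:

```python
def run_split_name(run_timestamp: str) -> str:
    """Map a run timestamp such as '2024-01-24T10:48:34.323714' to the
    split name seen in the data_files listings: '2024_01_24T10_48_34.323714'."""
    return run_timestamp.replace("-", "_").replace(":", "_")


def run_parquet_stem(run_timestamp: str) -> str:
    """The parquet filenames keep the date dashes and turn colons into
    dashes instead: '2024-01-24T10-48-34.323714'."""
    return run_timestamp.replace(":", "-")


print(run_split_name("2024-01-24T10:48:34.323714"))    # 2024_01_24T10_48_34.323714
print(run_parquet_stem("2024-01-24T10:48:34.323714"))  # 2024-01-24T10-48-34.323714
```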
[ "# Dataset Card for Evaluation run of OpenBuddy/openbuddy-mixtral-7bx8-v17.1-32k\n\n\n\nDataset automatically created during the evaluation run of model OpenBuddy/openbuddy-mixtral-7bx8-v17.1-32k on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-01-24T10:48:34.323714(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of OpenBuddy/openbuddy-mixtral-7bx8-v17.1-32k\n\n\n\nDataset automatically created during the evaluation run of model OpenBuddy/openbuddy-mixtral-7bx8-v17.1-32k on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-01-24T10:48:34.323714(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
b72d8008682946ff1f84b1717d461cbf020ccfa6
A collection of raw images from DIV2K, Flickr2K and OST datasets. Please refer [here](https://github.com/XPixelGroup/BasicSR/blob/master/docs/DatasetPreparation.md) for details. ## Citation ```bibtex @inproceedings{agustsson2017ntire, title={Ntire 2017 challenge on single image super-resolution: Dataset and study}, author={Agustsson, Eirikur and Timofte, Radu}, booktitle={CVPRW}, year={2017} } @InProceedings{Lim_2017_CVPR_Workshops, author = {Lim, Bee and Son, Sanghyun and Kim, Heewon and Nah, Seungjun and Lee, Kyoung Mu}, title = {Enhanced Deep Residual Networks for Single Image Super-Resolution}, booktitle = {CVPRW}, year = {2017} } @inproceedings{wang2018recovering, title={Recovering realistic texture in image super-resolution by deep spatial feature transform}, author={Wang, Xintao and Yu, Ke and Dong, Chao and Loy, Chen Change}, booktitle={CVPR}, year={2018} } ```
Iceclear/DF2K-OST
[ "task_categories:image-to-image", "license:apache-2.0", "region:us" ]
2024-01-24T10:53:31+00:00
{"license": "apache-2.0", "task_categories": ["image-to-image"]}
2024-02-01T06:51:25+00:00
[]
[]
TAGS #task_categories-image-to-image #license-apache-2.0 #region-us
A collection of raw images from DIV2K, Flickr2K and OST datasets. Please refer here for details.
[]
[ "TAGS\n#task_categories-image-to-image #license-apache-2.0 #region-us \n" ]
3cabf0ad82fbcfb67bd20c0a92020ee61448c471
# Dataset Card for "exp" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
thiomajid/exp
[ "region:us" ]
2024-01-24T10:59:46+00:00
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "apologetic", "num_bytes": 614448, "num_examples": 558}, {"name": "non_apologetic", "num_bytes": 9441420, "num_examples": 5047}], "download_size": 4301960, "dataset_size": 10055868}}
2024-01-27T14:48:16+00:00
[]
[]
TAGS #region-us
# Dataset Card for "exp" More Information needed
[ "# Dataset Card for \"exp\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"exp\"\n\nMore Information needed" ]
39c1778ddb55bea53007ba09eb3f297818672986
# Dataset Card for Evaluation run of JaeyeonKang/CCK_gony <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [JaeyeonKang/CCK_gony](https://huggingface.co/JaeyeonKang/CCK_gony) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_JaeyeonKang__CCK_gony", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2024-01-24T11:01:11.626042](https://huggingface.co/datasets/open-llm-leaderboard/details_JaeyeonKang__CCK_gony/blob/main/results_2024-01-24T11-01-11.626042.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.6924973006675793, "acc_stderr": 0.030756010778234835, "acc_norm": 0.6971433352879666, "acc_norm_stderr": 0.031350039645753676, "mc1": 0.41615667074663404, "mc1_stderr": 0.017255657502903043, "mc2": 0.5674014937456792, "mc2_stderr": 0.015092873937221477 }, "harness|arc:challenge|25": { "acc": 0.6382252559726962, "acc_stderr": 0.014041957945038082, "acc_norm": 0.6911262798634812, "acc_norm_stderr": 0.013501770929344 }, "harness|hellaswag|10": { "acc": 0.6756622186815375, "acc_stderr": 0.004671701705567238, "acc_norm": 0.867755427205736, "acc_norm_stderr": 0.003380641470989921 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.41, "acc_stderr": 0.04943110704237103, "acc_norm": 0.41, "acc_norm_stderr": 0.04943110704237103 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.6444444444444445, "acc_stderr": 0.04135176749720385, "acc_norm": 0.6444444444444445, "acc_norm_stderr": 0.04135176749720385 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.7828947368421053, "acc_stderr": 0.03355045304882923, "acc_norm": 0.7828947368421053, "acc_norm_stderr": 0.03355045304882923 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.69, "acc_stderr": 0.04648231987117316, "acc_norm": 0.69, "acc_norm_stderr": 0.04648231987117316 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.769811320754717, "acc_stderr": 0.025907897122408173, "acc_norm": 0.769811320754717, "acc_norm_stderr": 0.025907897122408173 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.8402777777777778, "acc_stderr": 0.03063557897209328, "acc_norm": 0.8402777777777778, "acc_norm_stderr": 0.03063557897209328 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.52, "acc_stderr": 0.050211673156867795, "acc_norm": 0.52, "acc_norm_stderr": 0.050211673156867795 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.58, "acc_stderr": 0.049604496374885836, "acc_norm": 0.58, "acc_norm_stderr": 
0.049604496374885836 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.36, "acc_stderr": 0.04824181513244218, "acc_norm": 0.36, "acc_norm_stderr": 0.04824181513244218 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.7109826589595376, "acc_stderr": 0.034564257450869995, "acc_norm": 0.7109826589595376, "acc_norm_stderr": 0.034564257450869995 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.4117647058823529, "acc_stderr": 0.048971049527263666, "acc_norm": 0.4117647058823529, "acc_norm_stderr": 0.048971049527263666 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.83, "acc_stderr": 0.03775251680686371, "acc_norm": 0.83, "acc_norm_stderr": 0.03775251680686371 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.6510638297872341, "acc_stderr": 0.031158522131357783, "acc_norm": 0.6510638297872341, "acc_norm_stderr": 0.031158522131357783 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.5701754385964912, "acc_stderr": 0.04657047260594964, "acc_norm": 0.5701754385964912, "acc_norm_stderr": 0.04657047260594964 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.6689655172413793, "acc_stderr": 0.03921545312467122, "acc_norm": 0.6689655172413793, "acc_norm_stderr": 0.03921545312467122 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.4497354497354497, "acc_stderr": 0.02562085704293665, "acc_norm": 0.4497354497354497, "acc_norm_stderr": 0.02562085704293665 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.5079365079365079, "acc_stderr": 0.044715725362943486, "acc_norm": 0.5079365079365079, "acc_norm_stderr": 0.044715725362943486 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.37, "acc_stderr": 0.04852365870939099, "acc_norm": 0.37, "acc_norm_stderr": 0.04852365870939099 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.8096774193548387, "acc_stderr": 0.02233170761182307, "acc_norm": 0.8096774193548387, "acc_norm_stderr": 0.02233170761182307 }, 
"harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.5566502463054187, "acc_stderr": 0.03495334582162933, "acc_norm": 0.5566502463054187, "acc_norm_stderr": 0.03495334582162933 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.7, "acc_stderr": 0.046056618647183814, "acc_norm": 0.7, "acc_norm_stderr": 0.046056618647183814 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.793939393939394, "acc_stderr": 0.03158415324047709, "acc_norm": 0.793939393939394, "acc_norm_stderr": 0.03158415324047709 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.8434343434343434, "acc_stderr": 0.025890520358141454, "acc_norm": 0.8434343434343434, "acc_norm_stderr": 0.025890520358141454 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.927461139896373, "acc_stderr": 0.018718998520678178, "acc_norm": 0.927461139896373, "acc_norm_stderr": 0.018718998520678178 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.7025641025641025, "acc_stderr": 0.023177408131465953, "acc_norm": 0.7025641025641025, "acc_norm_stderr": 0.023177408131465953 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.37407407407407406, "acc_stderr": 0.029502861128955293, "acc_norm": 0.37407407407407406, "acc_norm_stderr": 0.029502861128955293 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.7647058823529411, "acc_stderr": 0.027553614467863814, "acc_norm": 0.7647058823529411, "acc_norm_stderr": 0.027553614467863814 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.44370860927152317, "acc_stderr": 0.04056527902281731, "acc_norm": 0.44370860927152317, "acc_norm_stderr": 0.04056527902281731 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8678899082568807, "acc_stderr": 0.014517801914598238, "acc_norm": 0.8678899082568807, "acc_norm_stderr": 0.014517801914598238 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.5787037037037037, "acc_stderr": 
0.03367462138896078, "acc_norm": 0.5787037037037037, "acc_norm_stderr": 0.03367462138896078 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.8578431372549019, "acc_stderr": 0.02450980392156862, "acc_norm": 0.8578431372549019, "acc_norm_stderr": 0.02450980392156862 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.8438818565400844, "acc_stderr": 0.023627159460318684, "acc_norm": 0.8438818565400844, "acc_norm_stderr": 0.023627159460318684 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.7085201793721974, "acc_stderr": 0.030500283176545857, "acc_norm": 0.7085201793721974, "acc_norm_stderr": 0.030500283176545857 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.8320610687022901, "acc_stderr": 0.03278548537343138, "acc_norm": 0.8320610687022901, "acc_norm_stderr": 0.03278548537343138 }, "harness|hendrycksTest-international_law|5": { "acc": 0.8842975206611571, "acc_stderr": 0.029199802455622814, "acc_norm": 0.8842975206611571, "acc_norm_stderr": 0.029199802455622814 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.8148148148148148, "acc_stderr": 0.03755265865037181, "acc_norm": 0.8148148148148148, "acc_norm_stderr": 0.03755265865037181 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7484662576687117, "acc_stderr": 0.03408997886857529, "acc_norm": 0.7484662576687117, "acc_norm_stderr": 0.03408997886857529 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.48214285714285715, "acc_stderr": 0.047427623612430116, "acc_norm": 0.48214285714285715, "acc_norm_stderr": 0.047427623612430116 }, "harness|hendrycksTest-management|5": { "acc": 0.8252427184466019, "acc_stderr": 0.037601780060266224, "acc_norm": 0.8252427184466019, "acc_norm_stderr": 0.037601780060266224 }, "harness|hendrycksTest-marketing|5": { "acc": 0.9017094017094017, "acc_stderr": 0.019503444900757567, "acc_norm": 0.9017094017094017, "acc_norm_stderr": 0.019503444900757567 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.8, "acc_stderr": 
0.040201512610368445, "acc_norm": 0.8, "acc_norm_stderr": 0.040201512610368445 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.8659003831417624, "acc_stderr": 0.012185528166499983, "acc_norm": 0.8659003831417624, "acc_norm_stderr": 0.012185528166499983 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.7832369942196532, "acc_stderr": 0.022183477668412856, "acc_norm": 0.7832369942196532, "acc_norm_stderr": 0.022183477668412856 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.5072625698324023, "acc_stderr": 0.0167207374051795, "acc_norm": 0.5072625698324023, "acc_norm_stderr": 0.0167207374051795 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.7941176470588235, "acc_stderr": 0.0231527224394023, "acc_norm": 0.7941176470588235, "acc_norm_stderr": 0.0231527224394023 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.7813504823151125, "acc_stderr": 0.023475581417861113, "acc_norm": 0.7813504823151125, "acc_norm_stderr": 0.023475581417861113 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.8209876543209876, "acc_stderr": 0.02133086876212706, "acc_norm": 0.8209876543209876, "acc_norm_stderr": 0.02133086876212706 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.5212765957446809, "acc_stderr": 0.029800481645628693, "acc_norm": 0.5212765957446809, "acc_norm_stderr": 0.029800481645628693 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.5195567144719687, "acc_stderr": 0.012760464028289295, "acc_norm": 0.5195567144719687, "acc_norm_stderr": 0.012760464028289295 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.7794117647058824, "acc_stderr": 0.025187786660227255, "acc_norm": 0.7794117647058824, "acc_norm_stderr": 0.025187786660227255 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.7173202614379085, "acc_stderr": 0.01821726955205344, "acc_norm": 0.7173202614379085, "acc_norm_stderr": 0.01821726955205344 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.7181818181818181, "acc_stderr": 
0.043091187099464585, "acc_norm": 0.7181818181818181, "acc_norm_stderr": 0.043091187099464585 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.7755102040816326, "acc_stderr": 0.026711430555538408, "acc_norm": 0.7755102040816326, "acc_norm_stderr": 0.026711430555538408 }, "harness|hendrycksTest-sociology|5": { "acc": 0.8905472636815921, "acc_stderr": 0.02207632610182466, "acc_norm": 0.8905472636815921, "acc_norm_stderr": 0.02207632610182466 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.89, "acc_stderr": 0.03144660377352203, "acc_norm": 0.89, "acc_norm_stderr": 0.03144660377352203 }, "harness|hendrycksTest-virology|5": { "acc": 0.5301204819277109, "acc_stderr": 0.03885425420866767, "acc_norm": 0.5301204819277109, "acc_norm_stderr": 0.03885425420866767 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.847953216374269, "acc_stderr": 0.02753912288906145, "acc_norm": 0.847953216374269, "acc_norm_stderr": 0.02753912288906145 }, "harness|truthfulqa:mc|0": { "mc1": 0.41615667074663404, "mc1_stderr": 0.017255657502903043, "mc2": 0.5674014937456792, "mc2_stderr": 0.015092873937221477 }, "harness|winogrande|5": { "acc": 0.8153117600631413, "acc_stderr": 0.010905978112156886 }, "harness|gsm8k|5": { "acc": 0.5405610310841547, "acc_stderr": 0.013727093010429788 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. 
--> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. 
--> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
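The details repository id used in the loading snippet of this card is derived mechanically from the model id (`/` becomes `__`, prefixed with `open-llm-leaderboard/details_`), so a small helper can generate it when scripting over many models. This is a convention inferred from the repository ids in this dump, not a documented guarantee:

```python
def details_repo_id(model_id: str) -> str:
    """Derive the leaderboard details repository id from a model id,
    e.g. 'JaeyeonKang/CCK_gony' ->
    'open-llm-leaderboard/details_JaeyeonKang__CCK_gony'.
    Assumes the details_<org>__<model> naming convention seen in these cards."""
    return "open-llm-leaderboard/details_" + model_id.replace("/", "__")


print(details_repo_id("JaeyeonKang/CCK_gony"))
# open-llm-leaderboard/details_JaeyeonKang__CCK_gony
```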
open-llm-leaderboard/details_JaeyeonKang__CCK_gony
[ "region:us" ]
2024-01-24T11:03:31+00:00
{"pretty_name": "Evaluation run of JaeyeonKang/CCK_gony", "dataset_summary": "Dataset automatically created during the evaluation run of model [JaeyeonKang/CCK_gony](https://huggingface.co/JaeyeonKang/CCK_gony) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_JaeyeonKang__CCK_gony\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-24T11:01:11.626042](https://huggingface.co/datasets/open-llm-leaderboard/details_JaeyeonKang__CCK_gony/blob/main/results_2024-01-24T11-01-11.626042.json)(note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6924973006675793,\n \"acc_stderr\": 0.030756010778234835,\n \"acc_norm\": 0.6971433352879666,\n \"acc_norm_stderr\": 0.031350039645753676,\n \"mc1\": 0.41615667074663404,\n \"mc1_stderr\": 0.017255657502903043,\n \"mc2\": 0.5674014937456792,\n \"mc2_stderr\": 0.015092873937221477\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6382252559726962,\n \"acc_stderr\": 0.014041957945038082,\n \"acc_norm\": 0.6911262798634812,\n \"acc_norm_stderr\": 0.013501770929344\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6756622186815375,\n \"acc_stderr\": 0.004671701705567238,\n \"acc_norm\": 0.867755427205736,\n \"acc_norm_stderr\": 0.003380641470989921\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.41,\n \"acc_stderr\": 0.04943110704237103,\n \"acc_norm\": 0.41,\n \"acc_norm_stderr\": 0.04943110704237103\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6444444444444445,\n \"acc_stderr\": 0.04135176749720385,\n \"acc_norm\": 0.6444444444444445,\n \"acc_norm_stderr\": 0.04135176749720385\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.7828947368421053,\n \"acc_stderr\": 0.03355045304882923,\n \"acc_norm\": 0.7828947368421053,\n \"acc_norm_stderr\": 0.03355045304882923\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.769811320754717,\n \"acc_stderr\": 0.025907897122408173,\n \"acc_norm\": 0.769811320754717,\n \"acc_norm_stderr\": 0.025907897122408173\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.8402777777777778,\n \"acc_stderr\": 0.03063557897209328,\n \"acc_norm\": 0.8402777777777778,\n \"acc_norm_stderr\": 0.03063557897209328\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.52,\n 
\"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.52,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.58,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.58,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.36,\n \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.7109826589595376,\n \"acc_stderr\": 0.034564257450869995,\n \"acc_norm\": 0.7109826589595376,\n \"acc_norm_stderr\": 0.034564257450869995\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.4117647058823529,\n \"acc_stderr\": 0.048971049527263666,\n \"acc_norm\": 0.4117647058823529,\n \"acc_norm_stderr\": 0.048971049527263666\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.83,\n \"acc_stderr\": 0.03775251680686371,\n \"acc_norm\": 0.83,\n \"acc_norm_stderr\": 0.03775251680686371\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.6510638297872341,\n \"acc_stderr\": 0.031158522131357783,\n \"acc_norm\": 0.6510638297872341,\n \"acc_norm_stderr\": 0.031158522131357783\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5701754385964912,\n \"acc_stderr\": 0.04657047260594964,\n \"acc_norm\": 0.5701754385964912,\n \"acc_norm_stderr\": 0.04657047260594964\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.6689655172413793,\n \"acc_stderr\": 0.03921545312467122,\n \"acc_norm\": 0.6689655172413793,\n \"acc_norm_stderr\": 0.03921545312467122\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.4497354497354497,\n \"acc_stderr\": 0.02562085704293665,\n \"acc_norm\": 0.4497354497354497,\n \"acc_norm_stderr\": 0.02562085704293665\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.5079365079365079,\n \"acc_stderr\": 
0.044715725362943486,\n \"acc_norm\": 0.5079365079365079,\n \"acc_norm_stderr\": 0.044715725362943486\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.37,\n \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.8096774193548387,\n \"acc_stderr\": 0.02233170761182307,\n \"acc_norm\": 0.8096774193548387,\n \"acc_norm_stderr\": 0.02233170761182307\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5566502463054187,\n \"acc_stderr\": 0.03495334582162933,\n \"acc_norm\": 0.5566502463054187,\n \"acc_norm_stderr\": 0.03495334582162933\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.793939393939394,\n \"acc_stderr\": 0.03158415324047709,\n \"acc_norm\": 0.793939393939394,\n \"acc_norm_stderr\": 0.03158415324047709\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.8434343434343434,\n \"acc_stderr\": 0.025890520358141454,\n \"acc_norm\": 0.8434343434343434,\n \"acc_norm_stderr\": 0.025890520358141454\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.927461139896373,\n \"acc_stderr\": 0.018718998520678178,\n \"acc_norm\": 0.927461139896373,\n \"acc_norm_stderr\": 0.018718998520678178\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.7025641025641025,\n \"acc_stderr\": 0.023177408131465953,\n \"acc_norm\": 0.7025641025641025,\n \"acc_norm_stderr\": 0.023177408131465953\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.37407407407407406,\n \"acc_stderr\": 0.029502861128955293,\n \"acc_norm\": 0.37407407407407406,\n \"acc_norm_stderr\": 0.029502861128955293\n },\n 
\"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.7647058823529411,\n \"acc_stderr\": 0.027553614467863814,\n \"acc_norm\": 0.7647058823529411,\n \"acc_norm_stderr\": 0.027553614467863814\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.44370860927152317,\n \"acc_stderr\": 0.04056527902281731,\n \"acc_norm\": 0.44370860927152317,\n \"acc_norm_stderr\": 0.04056527902281731\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8678899082568807,\n \"acc_stderr\": 0.014517801914598238,\n \"acc_norm\": 0.8678899082568807,\n \"acc_norm_stderr\": 0.014517801914598238\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5787037037037037,\n \"acc_stderr\": 0.03367462138896078,\n \"acc_norm\": 0.5787037037037037,\n \"acc_norm_stderr\": 0.03367462138896078\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8578431372549019,\n \"acc_stderr\": 0.02450980392156862,\n \"acc_norm\": 0.8578431372549019,\n \"acc_norm_stderr\": 0.02450980392156862\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.8438818565400844,\n \"acc_stderr\": 0.023627159460318684,\n \"acc_norm\": 0.8438818565400844,\n \"acc_norm_stderr\": 0.023627159460318684\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7085201793721974,\n \"acc_stderr\": 0.030500283176545857,\n \"acc_norm\": 0.7085201793721974,\n \"acc_norm_stderr\": 0.030500283176545857\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.8320610687022901,\n \"acc_stderr\": 0.03278548537343138,\n \"acc_norm\": 0.8320610687022901,\n \"acc_norm_stderr\": 0.03278548537343138\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8842975206611571,\n \"acc_stderr\": 0.029199802455622814,\n \"acc_norm\": 0.8842975206611571,\n \"acc_norm_stderr\": 0.029199802455622814\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8148148148148148,\n \"acc_stderr\": 0.03755265865037181,\n 
\"acc_norm\": 0.8148148148148148,\n \"acc_norm_stderr\": 0.03755265865037181\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7484662576687117,\n \"acc_stderr\": 0.03408997886857529,\n \"acc_norm\": 0.7484662576687117,\n \"acc_norm_stderr\": 0.03408997886857529\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.48214285714285715,\n \"acc_stderr\": 0.047427623612430116,\n \"acc_norm\": 0.48214285714285715,\n \"acc_norm_stderr\": 0.047427623612430116\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8252427184466019,\n \"acc_stderr\": 0.037601780060266224,\n \"acc_norm\": 0.8252427184466019,\n \"acc_norm_stderr\": 0.037601780060266224\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.9017094017094017,\n \"acc_stderr\": 0.019503444900757567,\n \"acc_norm\": 0.9017094017094017,\n \"acc_norm_stderr\": 0.019503444900757567\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.8,\n \"acc_stderr\": 0.040201512610368445,\n \"acc_norm\": 0.8,\n \"acc_norm_stderr\": 0.040201512610368445\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8659003831417624,\n \"acc_stderr\": 0.012185528166499983,\n \"acc_norm\": 0.8659003831417624,\n \"acc_norm_stderr\": 0.012185528166499983\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7832369942196532,\n \"acc_stderr\": 0.022183477668412856,\n \"acc_norm\": 0.7832369942196532,\n \"acc_norm_stderr\": 0.022183477668412856\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.5072625698324023,\n \"acc_stderr\": 0.0167207374051795,\n \"acc_norm\": 0.5072625698324023,\n \"acc_norm_stderr\": 0.0167207374051795\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7941176470588235,\n \"acc_stderr\": 0.0231527224394023,\n \"acc_norm\": 0.7941176470588235,\n \"acc_norm_stderr\": 0.0231527224394023\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7813504823151125,\n \"acc_stderr\": 0.023475581417861113,\n 
\"acc_norm\": 0.7813504823151125,\n \"acc_norm_stderr\": 0.023475581417861113\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.8209876543209876,\n \"acc_stderr\": 0.02133086876212706,\n \"acc_norm\": 0.8209876543209876,\n \"acc_norm_stderr\": 0.02133086876212706\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.5212765957446809,\n \"acc_stderr\": 0.029800481645628693,\n \"acc_norm\": 0.5212765957446809,\n \"acc_norm_stderr\": 0.029800481645628693\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.5195567144719687,\n \"acc_stderr\": 0.012760464028289295,\n \"acc_norm\": 0.5195567144719687,\n \"acc_norm_stderr\": 0.012760464028289295\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.7794117647058824,\n \"acc_stderr\": 0.025187786660227255,\n \"acc_norm\": 0.7794117647058824,\n \"acc_norm_stderr\": 0.025187786660227255\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.7173202614379085,\n \"acc_stderr\": 0.01821726955205344,\n \"acc_norm\": 0.7173202614379085,\n \"acc_norm_stderr\": 0.01821726955205344\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7181818181818181,\n \"acc_stderr\": 0.043091187099464585,\n \"acc_norm\": 0.7181818181818181,\n \"acc_norm_stderr\": 0.043091187099464585\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7755102040816326,\n \"acc_stderr\": 0.026711430555538408,\n \"acc_norm\": 0.7755102040816326,\n \"acc_norm_stderr\": 0.026711430555538408\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8905472636815921,\n \"acc_stderr\": 0.02207632610182466,\n \"acc_norm\": 0.8905472636815921,\n \"acc_norm_stderr\": 0.02207632610182466\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.89,\n \"acc_stderr\": 0.03144660377352203,\n \"acc_norm\": 0.89,\n \"acc_norm_stderr\": 0.03144660377352203\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5301204819277109,\n \"acc_stderr\": 
0.03885425420866767,\n \"acc_norm\": 0.5301204819277109,\n \"acc_norm_stderr\": 0.03885425420866767\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.847953216374269,\n \"acc_stderr\": 0.02753912288906145,\n \"acc_norm\": 0.847953216374269,\n \"acc_norm_stderr\": 0.02753912288906145\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.41615667074663404,\n \"mc1_stderr\": 0.017255657502903043,\n \"mc2\": 0.5674014937456792,\n \"mc2_stderr\": 0.015092873937221477\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8153117600631413,\n \"acc_stderr\": 0.010905978112156886\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.5405610310841547,\n \"acc_stderr\": 0.013727093010429788\n }\n}\n```", "repo_url": "https://huggingface.co/JaeyeonKang/CCK_gony", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|arc:challenge|25_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|gsm8k|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|hellaswag|10_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T11-01-11.626042.parquet", 
"**/details_harness|hendrycksTest-anatomy|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T11-01-11.626042.parquet", 
"**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T11-01-11.626042.parquet", 
"**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T11-01-11.626042.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T11-01-11.626042.parquet", 
"**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T11-01-11.626042.parquet", 
"**/details_harness|hendrycksTest-public_relations|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-24T11-01-11.626042.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": 
["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": 
["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", 
"path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": 
"2024_01_24T11_01_11.626042", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": 
"2024_01_24T11_01_11.626042", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-24T11-01-11.626042.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["**/details_harness|winogrande|5_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-24T11-01-11.626042.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_01_24T11_01_11.626042", "path": ["results_2024-01-24T11-01-11.626042.parquet"]}, {"split": "latest", "path": ["results_2024-01-24T11-01-11.626042.parquet"]}]}]}
2024-01-24T11:03:53+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of JaeyeonKang/CCK_gony Dataset automatically created during the evaluation run of model JaeyeonKang/CCK_gony on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2024-01-24T11:01:11.626042 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
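The card above says the details of a run can be loaded but omits the snippet. A minimal sketch, assuming the standard leaderboard naming convention (visible elsewhere in this document: `/` in the model name becomes `__` in the details-repo id); the actual `load_dataset` call needs network access to the Hub, so it is shown commented out:

```python
# Build the details-repo id for this model (assumed convention: "/" -> "__").
model_name = "JaeyeonKang/CCK_gony"
repo_id = "open-llm-leaderboard/details_" + model_name.replace("/", "__")
print(repo_id)  # open-llm-leaderboard/details_JaeyeonKang__CCK_gony

# With network access, one eval config could then be loaded like this:
# from datasets import load_dataset
# data = load_dataset(repo_id, "harness_winogrande_5", split="train")
```

The config name (`"harness_winogrande_5"`) and split (`"train"`, i.e. the latest results) follow the pattern used by the other cards in this dump.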
5ca13310dbe408c442790f09f4a8826cb292af50
# Dataset Card for Evaluation run of vanillaOVO/supermario_v1 <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [vanillaOVO/supermario_v1](https://huggingface.co/vanillaOVO/supermario_v1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_vanillaOVO__supermario_v1", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2024-01-24T11:15:03.879337](https://huggingface.co/datasets/open-llm-leaderboard/details_vanillaOVO__supermario_v1/blob/main/results_2024-01-24T11-15-03.879337.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.6522614481007412, "acc_stderr": 0.03211424191343289, "acc_norm": 0.651384484443393, "acc_norm_stderr": 0.03279077050133014, "mc1": 0.5556915544675642, "mc1_stderr": 0.017394586250743183, "mc2": 0.682309495837554, "mc2_stderr": 0.015287773873519487 }, "harness|arc:challenge|25": { "acc": 0.7133105802047781, "acc_stderr": 0.013214986329274776, "acc_norm": 0.7372013651877133, "acc_norm_stderr": 0.012862523175351335 }, "harness|hellaswag|10": { "acc": 0.7285401314479187, "acc_stderr": 0.0044380385833450876, "acc_norm": 0.8870742879904402, "acc_norm_stderr": 0.0031585512705264015 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.35, "acc_stderr": 0.0479372485441102, "acc_norm": 0.35, "acc_norm_stderr": 0.0479372485441102 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.6444444444444445, "acc_stderr": 0.04135176749720385, "acc_norm": 0.6444444444444445, "acc_norm_stderr": 0.04135176749720385 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.6907894736842105, "acc_stderr": 0.037610708698674805, "acc_norm": 0.6907894736842105, "acc_norm_stderr": 0.037610708698674805 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.65, "acc_stderr": 0.0479372485441102, "acc_norm": 0.65, "acc_norm_stderr": 0.0479372485441102 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.720754716981132, "acc_stderr": 0.027611163402399715, "acc_norm": 0.720754716981132, "acc_norm_stderr": 0.027611163402399715 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.7777777777777778, "acc_stderr": 0.03476590104304134, "acc_norm": 0.7777777777777778, "acc_norm_stderr": 0.03476590104304134 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.5, "acc_stderr": 0.050251890762960605, "acc_norm": 0.5, "acc_norm_stderr": 0.050251890762960605 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.51, "acc_stderr": 0.05024183937956912, "acc_norm": 0.51, "acc_norm_stderr": 
0.05024183937956912 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.32, "acc_stderr": 0.046882617226215034, "acc_norm": 0.32, "acc_norm_stderr": 0.046882617226215034 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.653179190751445, "acc_stderr": 0.036291466701596636, "acc_norm": 0.653179190751445, "acc_norm_stderr": 0.036291466701596636 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.3627450980392157, "acc_stderr": 0.047840607041056527, "acc_norm": 0.3627450980392157, "acc_norm_stderr": 0.047840607041056527 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.76, "acc_stderr": 0.04292346959909284, "acc_norm": 0.76, "acc_norm_stderr": 0.04292346959909284 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.5574468085106383, "acc_stderr": 0.03246956919789958, "acc_norm": 0.5574468085106383, "acc_norm_stderr": 0.03246956919789958 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.4824561403508772, "acc_stderr": 0.04700708033551038, "acc_norm": 0.4824561403508772, "acc_norm_stderr": 0.04700708033551038 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.5586206896551724, "acc_stderr": 0.04137931034482757, "acc_norm": 0.5586206896551724, "acc_norm_stderr": 0.04137931034482757 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.43386243386243384, "acc_stderr": 0.025525034382474894, "acc_norm": 0.43386243386243384, "acc_norm_stderr": 0.025525034382474894 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.4444444444444444, "acc_stderr": 0.04444444444444449, "acc_norm": 0.4444444444444444, "acc_norm_stderr": 0.04444444444444449 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.33, "acc_stderr": 0.04725815626252604, "acc_norm": 0.33, "acc_norm_stderr": 0.04725815626252604 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.7774193548387097, "acc_stderr": 0.023664216671642518, "acc_norm": 0.7774193548387097, "acc_norm_stderr": 0.023664216671642518 }, 
"harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.5123152709359606, "acc_stderr": 0.035169204442208966, "acc_norm": 0.5123152709359606, "acc_norm_stderr": 0.035169204442208966 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.69, "acc_stderr": 0.04648231987117316, "acc_norm": 0.69, "acc_norm_stderr": 0.04648231987117316 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.7757575757575758, "acc_stderr": 0.03256866661681102, "acc_norm": 0.7757575757575758, "acc_norm_stderr": 0.03256866661681102 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.7929292929292929, "acc_stderr": 0.028869778460267045, "acc_norm": 0.7929292929292929, "acc_norm_stderr": 0.028869778460267045 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.917098445595855, "acc_stderr": 0.01989934131572178, "acc_norm": 0.917098445595855, "acc_norm_stderr": 0.01989934131572178 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.6794871794871795, "acc_stderr": 0.02366129639396428, "acc_norm": 0.6794871794871795, "acc_norm_stderr": 0.02366129639396428 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.32592592592592595, "acc_stderr": 0.02857834836547308, "acc_norm": 0.32592592592592595, "acc_norm_stderr": 0.02857834836547308 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.6764705882352942, "acc_stderr": 0.03038835355188679, "acc_norm": 0.6764705882352942, "acc_norm_stderr": 0.03038835355188679 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.3708609271523179, "acc_stderr": 0.039439666991836285, "acc_norm": 0.3708609271523179, "acc_norm_stderr": 0.039439666991836285 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8403669724770643, "acc_stderr": 0.01570349834846177, "acc_norm": 0.8403669724770643, "acc_norm_stderr": 0.01570349834846177 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.5046296296296297, "acc_stderr": 
0.03409825519163572, "acc_norm": 0.5046296296296297, "acc_norm_stderr": 0.03409825519163572 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.8480392156862745, "acc_stderr": 0.025195658428931792, "acc_norm": 0.8480392156862745, "acc_norm_stderr": 0.025195658428931792 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.7974683544303798, "acc_stderr": 0.026160568246601443, "acc_norm": 0.7974683544303798, "acc_norm_stderr": 0.026160568246601443 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.695067264573991, "acc_stderr": 0.030898610882477515, "acc_norm": 0.695067264573991, "acc_norm_stderr": 0.030898610882477515 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.7786259541984732, "acc_stderr": 0.0364129708131373, "acc_norm": 0.7786259541984732, "acc_norm_stderr": 0.0364129708131373 }, "harness|hendrycksTest-international_law|5": { "acc": 0.768595041322314, "acc_stderr": 0.03849856098794088, "acc_norm": 0.768595041322314, "acc_norm_stderr": 0.03849856098794088 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.75, "acc_stderr": 0.04186091791394607, "acc_norm": 0.75, "acc_norm_stderr": 0.04186091791394607 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7730061349693251, "acc_stderr": 0.03291099578615769, "acc_norm": 0.7730061349693251, "acc_norm_stderr": 0.03291099578615769 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.45535714285714285, "acc_stderr": 0.047268355537191, "acc_norm": 0.45535714285714285, "acc_norm_stderr": 0.047268355537191 }, "harness|hendrycksTest-management|5": { "acc": 0.7572815533980582, "acc_stderr": 0.04245022486384495, "acc_norm": 0.7572815533980582, "acc_norm_stderr": 0.04245022486384495 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8760683760683761, "acc_stderr": 0.021586494001281365, "acc_norm": 0.8760683760683761, "acc_norm_stderr": 0.021586494001281365 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.71, "acc_stderr": 0.045604802157206845, "acc_norm": 0.71, 
"acc_norm_stderr": 0.045604802157206845 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.822477650063857, "acc_stderr": 0.01366423099583483, "acc_norm": 0.822477650063857, "acc_norm_stderr": 0.01366423099583483 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.7369942196531792, "acc_stderr": 0.02370309952525818, "acc_norm": 0.7369942196531792, "acc_norm_stderr": 0.02370309952525818 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.43910614525139663, "acc_stderr": 0.01659802212058043, "acc_norm": 0.43910614525139663, "acc_norm_stderr": 0.01659802212058043 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.7124183006535948, "acc_stderr": 0.025917806117147158, "acc_norm": 0.7124183006535948, "acc_norm_stderr": 0.025917806117147158 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.7202572347266881, "acc_stderr": 0.02549425935069491, "acc_norm": 0.7202572347266881, "acc_norm_stderr": 0.02549425935069491 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.7438271604938271, "acc_stderr": 0.0242885336377261, "acc_norm": 0.7438271604938271, "acc_norm_stderr": 0.0242885336377261 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.5035460992907801, "acc_stderr": 0.02982674915328092, "acc_norm": 0.5035460992907801, "acc_norm_stderr": 0.02982674915328092 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.46740547588005216, "acc_stderr": 0.012743072942653349, "acc_norm": 0.46740547588005216, "acc_norm_stderr": 0.012743072942653349 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.6727941176470589, "acc_stderr": 0.028501452860396553, "acc_norm": 0.6727941176470589, "acc_norm_stderr": 0.028501452860396553 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6715686274509803, "acc_stderr": 0.018999707383162673, "acc_norm": 0.6715686274509803, "acc_norm_stderr": 0.018999707383162673 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6636363636363637, "acc_stderr": 0.04525393596302506, "acc_norm": 
0.6636363636363637, "acc_norm_stderr": 0.04525393596302506 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.7346938775510204, "acc_stderr": 0.0282638899437846, "acc_norm": 0.7346938775510204, "acc_norm_stderr": 0.0282638899437846 }, "harness|hendrycksTest-sociology|5": { "acc": 0.8308457711442786, "acc_stderr": 0.02650859065623327, "acc_norm": 0.8308457711442786, "acc_norm_stderr": 0.02650859065623327 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.86, "acc_stderr": 0.03487350880197769, "acc_norm": 0.86, "acc_norm_stderr": 0.03487350880197769 }, "harness|hendrycksTest-virology|5": { "acc": 0.572289156626506, "acc_stderr": 0.038515976837185335, "acc_norm": 0.572289156626506, "acc_norm_stderr": 0.038515976837185335 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.8362573099415205, "acc_stderr": 0.028380919596145866, "acc_norm": 0.8362573099415205, "acc_norm_stderr": 0.028380919596145866 }, "harness|truthfulqa:mc|0": { "mc1": 0.5556915544675642, "mc1_stderr": 0.017394586250743183, "mc2": 0.682309495837554, "mc2_stderr": 0.015287773873519487 }, "harness|winogrande|5": { "acc": 0.856353591160221, "acc_stderr": 0.009857280052696734 }, "harness|gsm8k|5": { "acc": 0.6823351023502654, "acc_stderr": 0.012824066621488838 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. 
--> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. 
--> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
open-llm-leaderboard/details_vanillaOVO__supermario_v1
[ "region:us" ]
2024-01-24T11:08:01+00:00
{"pretty_name": "Evaluation run of vanillaOVO/supermario_v1", "dataset_summary": "Dataset automatically created during the evaluation run of model [vanillaOVO/supermario_v1](https://huggingface.co/vanillaOVO/supermario_v1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_vanillaOVO__supermario_v1\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-24T11:15:03.879337](https://huggingface.co/datasets/open-llm-leaderboard/details_vanillaOVO__supermario_v1/blob/main/results_2024-01-24T11-15-03.879337.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6522614481007412,\n \"acc_stderr\": 0.03211424191343289,\n \"acc_norm\": 0.651384484443393,\n \"acc_norm_stderr\": 0.03279077050133014,\n \"mc1\": 0.5556915544675642,\n \"mc1_stderr\": 0.017394586250743183,\n \"mc2\": 0.682309495837554,\n \"mc2_stderr\": 0.015287773873519487\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.7133105802047781,\n \"acc_stderr\": 0.013214986329274776,\n \"acc_norm\": 0.7372013651877133,\n \"acc_norm_stderr\": 0.012862523175351335\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.7285401314479187,\n \"acc_stderr\": 0.0044380385833450876,\n \"acc_norm\": 0.8870742879904402,\n \"acc_norm_stderr\": 0.0031585512705264015\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.0479372485441102,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6444444444444445,\n \"acc_stderr\": 0.04135176749720385,\n \"acc_norm\": 0.6444444444444445,\n \"acc_norm_stderr\": 0.04135176749720385\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6907894736842105,\n \"acc_stderr\": 0.037610708698674805,\n \"acc_norm\": 0.6907894736842105,\n \"acc_norm_stderr\": 0.037610708698674805\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.65,\n \"acc_stderr\": 0.0479372485441102,\n \"acc_norm\": 0.65,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.720754716981132,\n \"acc_stderr\": 0.027611163402399715,\n \"acc_norm\": 0.720754716981132,\n \"acc_norm_stderr\": 0.027611163402399715\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7777777777777778,\n \"acc_stderr\": 0.03476590104304134,\n \"acc_norm\": 0.7777777777777778,\n \"acc_norm_stderr\": 0.03476590104304134\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.5,\n 
\"acc_stderr\": 0.050251890762960605,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.050251890762960605\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.51,\n \"acc_stderr\": 0.05024183937956912,\n \"acc_norm\": 0.51,\n \"acc_norm_stderr\": 0.05024183937956912\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.653179190751445,\n \"acc_stderr\": 0.036291466701596636,\n \"acc_norm\": 0.653179190751445,\n \"acc_norm_stderr\": 0.036291466701596636\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.3627450980392157,\n \"acc_stderr\": 0.047840607041056527,\n \"acc_norm\": 0.3627450980392157,\n \"acc_norm_stderr\": 0.047840607041056527\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.76,\n \"acc_stderr\": 0.04292346959909284,\n \"acc_norm\": 0.76,\n \"acc_norm_stderr\": 0.04292346959909284\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5574468085106383,\n \"acc_stderr\": 0.03246956919789958,\n \"acc_norm\": 0.5574468085106383,\n \"acc_norm_stderr\": 0.03246956919789958\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4824561403508772,\n \"acc_stderr\": 0.04700708033551038,\n \"acc_norm\": 0.4824561403508772,\n \"acc_norm_stderr\": 0.04700708033551038\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5586206896551724,\n \"acc_stderr\": 0.04137931034482757,\n \"acc_norm\": 0.5586206896551724,\n \"acc_norm_stderr\": 0.04137931034482757\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.43386243386243384,\n \"acc_stderr\": 0.025525034382474894,\n \"acc_norm\": 0.43386243386243384,\n \"acc_norm_stderr\": 0.025525034382474894\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4444444444444444,\n \"acc_stderr\": 
0.04444444444444449,\n \"acc_norm\": 0.4444444444444444,\n \"acc_norm_stderr\": 0.04444444444444449\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252604,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252604\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7774193548387097,\n \"acc_stderr\": 0.023664216671642518,\n \"acc_norm\": 0.7774193548387097,\n \"acc_norm_stderr\": 0.023664216671642518\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5123152709359606,\n \"acc_stderr\": 0.035169204442208966,\n \"acc_norm\": 0.5123152709359606,\n \"acc_norm_stderr\": 0.035169204442208966\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7757575757575758,\n \"acc_stderr\": 0.03256866661681102,\n \"acc_norm\": 0.7757575757575758,\n \"acc_norm_stderr\": 0.03256866661681102\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7929292929292929,\n \"acc_stderr\": 0.028869778460267045,\n \"acc_norm\": 0.7929292929292929,\n \"acc_norm_stderr\": 0.028869778460267045\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.917098445595855,\n \"acc_stderr\": 0.01989934131572178,\n \"acc_norm\": 0.917098445595855,\n \"acc_norm_stderr\": 0.01989934131572178\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6794871794871795,\n \"acc_stderr\": 0.02366129639396428,\n \"acc_norm\": 0.6794871794871795,\n \"acc_norm_stderr\": 0.02366129639396428\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.32592592592592595,\n \"acc_stderr\": 0.02857834836547308,\n \"acc_norm\": 0.32592592592592595,\n \"acc_norm_stderr\": 0.02857834836547308\n },\n 
\"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6764705882352942,\n \"acc_stderr\": 0.03038835355188679,\n \"acc_norm\": 0.6764705882352942,\n \"acc_norm_stderr\": 0.03038835355188679\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3708609271523179,\n \"acc_stderr\": 0.039439666991836285,\n \"acc_norm\": 0.3708609271523179,\n \"acc_norm_stderr\": 0.039439666991836285\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8403669724770643,\n \"acc_stderr\": 0.01570349834846177,\n \"acc_norm\": 0.8403669724770643,\n \"acc_norm_stderr\": 0.01570349834846177\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5046296296296297,\n \"acc_stderr\": 0.03409825519163572,\n \"acc_norm\": 0.5046296296296297,\n \"acc_norm_stderr\": 0.03409825519163572\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8480392156862745,\n \"acc_stderr\": 0.025195658428931792,\n \"acc_norm\": 0.8480392156862745,\n \"acc_norm_stderr\": 0.025195658428931792\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7974683544303798,\n \"acc_stderr\": 0.026160568246601443,\n \"acc_norm\": 0.7974683544303798,\n \"acc_norm_stderr\": 0.026160568246601443\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.695067264573991,\n \"acc_stderr\": 0.030898610882477515,\n \"acc_norm\": 0.695067264573991,\n \"acc_norm_stderr\": 0.030898610882477515\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7786259541984732,\n \"acc_stderr\": 0.0364129708131373,\n \"acc_norm\": 0.7786259541984732,\n \"acc_norm_stderr\": 0.0364129708131373\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.768595041322314,\n \"acc_stderr\": 0.03849856098794088,\n \"acc_norm\": 0.768595041322314,\n \"acc_norm_stderr\": 0.03849856098794088\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.75,\n \"acc_stderr\": 0.04186091791394607,\n \"acc_norm\": 0.75,\n 
\"acc_norm_stderr\": 0.04186091791394607\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7730061349693251,\n \"acc_stderr\": 0.03291099578615769,\n \"acc_norm\": 0.7730061349693251,\n \"acc_norm_stderr\": 0.03291099578615769\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.45535714285714285,\n \"acc_stderr\": 0.047268355537191,\n \"acc_norm\": 0.45535714285714285,\n \"acc_norm_stderr\": 0.047268355537191\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7572815533980582,\n \"acc_stderr\": 0.04245022486384495,\n \"acc_norm\": 0.7572815533980582,\n \"acc_norm_stderr\": 0.04245022486384495\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8760683760683761,\n \"acc_stderr\": 0.021586494001281365,\n \"acc_norm\": 0.8760683760683761,\n \"acc_norm_stderr\": 0.021586494001281365\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.822477650063857,\n \"acc_stderr\": 0.01366423099583483,\n \"acc_norm\": 0.822477650063857,\n \"acc_norm_stderr\": 0.01366423099583483\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7369942196531792,\n \"acc_stderr\": 0.02370309952525818,\n \"acc_norm\": 0.7369942196531792,\n \"acc_norm_stderr\": 0.02370309952525818\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.43910614525139663,\n \"acc_stderr\": 0.01659802212058043,\n \"acc_norm\": 0.43910614525139663,\n \"acc_norm_stderr\": 0.01659802212058043\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7124183006535948,\n \"acc_stderr\": 0.025917806117147158,\n \"acc_norm\": 0.7124183006535948,\n \"acc_norm_stderr\": 0.025917806117147158\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7202572347266881,\n \"acc_stderr\": 0.02549425935069491,\n \"acc_norm\": 0.7202572347266881,\n 
\"acc_norm_stderr\": 0.02549425935069491\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7438271604938271,\n \"acc_stderr\": 0.0242885336377261,\n \"acc_norm\": 0.7438271604938271,\n \"acc_norm_stderr\": 0.0242885336377261\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.5035460992907801,\n \"acc_stderr\": 0.02982674915328092,\n \"acc_norm\": 0.5035460992907801,\n \"acc_norm_stderr\": 0.02982674915328092\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.46740547588005216,\n \"acc_stderr\": 0.012743072942653349,\n \"acc_norm\": 0.46740547588005216,\n \"acc_norm_stderr\": 0.012743072942653349\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6727941176470589,\n \"acc_stderr\": 0.028501452860396553,\n \"acc_norm\": 0.6727941176470589,\n \"acc_norm_stderr\": 0.028501452860396553\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6715686274509803,\n \"acc_stderr\": 0.018999707383162673,\n \"acc_norm\": 0.6715686274509803,\n \"acc_norm_stderr\": 0.018999707383162673\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6636363636363637,\n \"acc_stderr\": 0.04525393596302506,\n \"acc_norm\": 0.6636363636363637,\n \"acc_norm_stderr\": 0.04525393596302506\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7346938775510204,\n \"acc_stderr\": 0.0282638899437846,\n \"acc_norm\": 0.7346938775510204,\n \"acc_norm_stderr\": 0.0282638899437846\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8308457711442786,\n \"acc_stderr\": 0.02650859065623327,\n \"acc_norm\": 0.8308457711442786,\n \"acc_norm_stderr\": 0.02650859065623327\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.86,\n \"acc_stderr\": 0.03487350880197769,\n \"acc_norm\": 0.86,\n \"acc_norm_stderr\": 0.03487350880197769\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.572289156626506,\n \"acc_stderr\": 0.038515976837185335,\n \"acc_norm\": 
0.572289156626506,\n \"acc_norm_stderr\": 0.038515976837185335\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8362573099415205,\n \"acc_stderr\": 0.028380919596145866,\n \"acc_norm\": 0.8362573099415205,\n \"acc_norm_stderr\": 0.028380919596145866\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5556915544675642,\n \"mc1_stderr\": 0.017394586250743183,\n \"mc2\": 0.682309495837554,\n \"mc2_stderr\": 0.015287773873519487\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.856353591160221,\n \"acc_stderr\": 0.009857280052696734\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6823351023502654,\n \"acc_stderr\": 0.012824066621488838\n }\n}\n```", "repo_url": "https://huggingface.co/vanillaOVO/supermario_v1", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|arc:challenge|25_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|arc:challenge|25_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|gsm8k|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|gsm8k|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|hellaswag|10_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|hellaswag|10_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hellaswag|10_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T11-05-43.543267.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-24T11-05-43.543267.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-24T11-05-43.543267.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T11-05-43.543267.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T11-05-43.543267.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-24T11-05-43.543267.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T11-05-43.543267.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T11-05-43.543267.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T11-05-43.543267.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T11-05-43.543267.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-24T11-05-43.543267.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-24T11-05-43.543267.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T11-05-43.543267.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-24T11-05-43.543267.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T11-05-43.543267.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T11-05-43.543267.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T11-05-43.543267.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-24T11-05-43.543267.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T11-05-43.543267.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T11-05-43.543267.parquet", 
"**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T11-05-43.543267.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T11-05-43.543267.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T11-05-43.543267.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T11-05-43.543267.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T11-05-43.543267.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T11-05-43.543267.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T11-05-43.543267.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T11-05-43.543267.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T11-05-43.543267.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T11-05-43.543267.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T11-05-43.543267.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T11-05-43.543267.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-24T11-05-43.543267.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T11-05-43.543267.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-24T11-05-43.543267.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T11-05-43.543267.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T11-05-43.543267.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T11-05-43.543267.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-24T11-05-43.543267.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-24T11-05-43.543267.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T11-05-43.543267.parquet", 
"**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T11-05-43.543267.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T11-05-43.543267.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T11-05-43.543267.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-24T11-05-43.543267.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-24T11-05-43.543267.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-24T11-05-43.543267.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T11-05-43.543267.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-24T11-05-43.543267.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T11-05-43.543267.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T11-05-43.543267.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-24T11-05-43.543267.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-24T11-05-43.543267.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-24T11-05-43.543267.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T11-05-43.543267.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-24T11-05-43.543267.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-24T11-15-03.879337.parquet", 
"**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T11-15-03.879337.parquet", 
"**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-24T11-15-03.879337.parquet", 
"**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T11-15-03.879337.parquet", 
"**/details_harness|hendrycksTest-econometrics|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T11-15-03.879337.parquet", 
"**/details_harness|hendrycksTest-international_law|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T11-15-03.879337.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-24T11-15-03.879337.parquet", 
"**/details_harness|hendrycksTest-world_religions|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": 
[{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", 
"path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T11-15-03.879337.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": 
["**/details_harness|hendrycksTest-nutrition|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": 
"2024_01_24T11_15_03.879337", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", 
"path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-world_religions|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["**/details_harness|winogrande|5_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["**/details_harness|winogrande|5_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-24T11-15-03.879337.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_01_24T11_05_43.543267", "path": ["results_2024-01-24T11-05-43.543267.parquet"]}, {"split": "2024_01_24T11_15_03.879337", "path": ["results_2024-01-24T11-15-03.879337.parquet"]}, {"split": "latest", "path": ["results_2024-01-24T11-15-03.879337.parquet"]}]}]}
2024-01-24T11:17:32+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of vanillaOVO/supermario_v1

Dataset automatically created during the evaluation run of model vanillaOVO/supermario_v1 on the Open LLM Leaderboard.

The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).

To load the details from a run, you can for instance do the following:

## Latest results

These are the latest results from run 2024-01-24T11:15:03.879337 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):

## Dataset Details

### Dataset Description

- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:

### Dataset Sources [optional]

- Repository:
- Paper [optional]:
- Demo [optional]:

## Uses

### Direct Use

### Out-of-Scope Use

## Dataset Structure

## Dataset Creation

### Curation Rationale

### Source Data

#### Data Collection and Processing

#### Who are the source data producers?

### Annotations [optional]

#### Annotation process

#### Who are the annotators?

#### Personal and Sensitive Information

## Bias, Risks, and Limitations

### Recommendations

Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.

## Citation [optional]

BibTeX:

APA:

## Glossary [optional]

## More Information [optional]

## Dataset Card Authors [optional]

## Dataset Card Contact
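The loading step referenced in the card ("you can for instance do the following") can be sketched as below. The repo id is an assumption derived from the leaderboard's naming convention ("details_<org>__<model>"), and `harness_winogrande_5` is one of the config names defined in the metadata above; neither is verified here:

```python
# Build the detail-repo id implied by the leaderboard convention for this
# model (assumed pattern: "open-llm-leaderboard/details_<org>__<model>").
model = "vanillaOVO/supermario_v1"
repo_id = "open-llm-leaderboard/details_" + model.replace("/", "__")
print(repo_id)  # open-llm-leaderboard/details_vanillaOVO__supermario_v1

# With the `datasets` library installed, the details of one task can then be
# loaded with, for example:
#   from datasets import load_dataset
#   data = load_dataset(repo_id, "harness_winogrande_5", split="latest")
```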
[ "# Dataset Card for Evaluation run of vanillaOVO/supermario_v1\n\n\n\nDataset automatically created during the evaluation run of model vanillaOVO/supermario_v1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-01-24T11:15:03.879337 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of vanillaOVO/supermario_v1\n\n\n\nDataset automatically created during the evaluation run of model vanillaOVO/supermario_v1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-01-24T11:15:03.879337 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
5880df797ab09f17101f05e4b861b49540806224
# Dataset of julia/ヅユリア/尤莉亚/율리아 (Nikke: Goddess of Victory) This is the dataset of julia/ヅユリア/尤莉亚/율리아 (Nikke: Goddess of Victory), containing 19 images and their tags. The core tags of this character are `bangs, breasts, red_eyes, short_hair, hair_ornament, hair_between_eyes, hair_flower, white_hair`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:----------|:-------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 19 | 36.75 MiB | [Download](https://huggingface.co/datasets/CyberHarem/julia_nikke/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 19 | 16.37 MiB | [Download](https://huggingface.co/datasets/CyberHarem/julia_nikke/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 40 | 32.32 MiB | [Download](https://huggingface.co/datasets/CyberHarem/julia_nikke/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 19 | 29.21 MiB | [Download](https://huggingface.co/datasets/CyberHarem/julia_nikke/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 40 | 48.44 MiB | [Download](https://huggingface.co/datasets/CyberHarem/julia_nikke/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. 
| ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/julia_nikke', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 6 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, bare_shoulders, elbow_gloves, solo, white_gloves, black_dress, closed_mouth, looking_at_viewer, cleavage, grey_hair, holding_instrument, red_rose, violin | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | bare_shoulders | elbow_gloves | solo | white_gloves | black_dress | closed_mouth | looking_at_viewer | cleavage | grey_hair | holding_instrument | red_rose | violin | 
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------------|:---------------|:-------|:---------------|:--------------|:---------------|:--------------------|:-----------|:------------|:---------------------|:-----------|:---------| | 0 | 6 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X |
CyberHarem/julia_nikke
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2024-01-24T11:23:32+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2024-01-24T11:27:55+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of julia/ヅユリア/尤莉亚/율리아 (Nikke: Goddess of Victory) ========================================================= This is the dataset of julia/ヅユリア/尤莉亚/율리아 (Nikke: Goddess of Victory), containing 19 images and their tags. The core tags of this character are 'bangs, breasts, red\_eyes, short\_hair, hair\_ornament, hair\_between\_eyes, hair\_flower, white\_hair', which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization). List of Packages ---------------- ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code List of Clusters ---------------- List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version ### Table Version
[ "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n", "### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.", "### Raw Text Version", "### Table Version" ]
e1ed03a01d4e049b1517c2bd3e1f9c0dd0509961
# Wikipedia 22-12 DE DPR For details about this dataset please see [telekom/wikipedia-22-12-de-dpr](https://github.com/telekom/wikipedia-22-12-de-dpr) on GitHub. ## Creator This data set is compiled and open sourced by [Philip May](https://may.la/) of [Deutsche Telekom](https://www.telekom.de/). ## Licensing ### The Code and Documentation Copyright (c) 2023-2024 [Philip May](https://may.la/), [Deutsche Telekom AG](https://www.telekom.de/) Licensed under the **MIT License** (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License by reviewing the file [LICENSE](https://github.com/telekom/mltb2/blob/main/LICENSE) in the repository. ### The Wikipedia Texts, Questions and Imperative Questions The Wikipedia texts are licensed under [CC BY-SA 4.0 Deed](https://creativecommons.org/licenses/by-sa/4.0/deed) by the corresponding authors of the [German Wikipedia](https://de.wikipedia.org/). The questions and imperative questions are copyright ([CC BY-SA 4.0 Deed](https://creativecommons.org/licenses/by-sa/4.0/deed)) by [Philip May](https://may.la/), [Deutsche Telekom AG](https://www.telekom.de/). Indication of changes: - data source is the [Cohere/wikipedia-22-12-de-embeddings](https://huggingface.co/datasets/Cohere/wikipedia-22-12-de-embeddings) dataset on Hugging Face Hub - we took `wiki_id`, `title` and `text` - did some normalization and filtering - and merged the texts to an appropriate token count - details can be found in the respective notebooks
deutsche-telekom/wikipedia-22-12-de-dpr
[ "task_categories:question-answering", "size_categories:100K<n<1M", "language:de", "license:cc-by-sa-4.0", "region:us" ]
2024-01-24T11:36:48+00:00
{"language": ["de"], "license": "cc-by-sa-4.0", "size_categories": ["100K<n<1M"], "task_categories": ["question-answering"]}
2024-01-24T11:44:19+00:00
[]
[ "de" ]
TAGS #task_categories-question-answering #size_categories-100K<n<1M #language-German #license-cc-by-sa-4.0 #region-us
# Wikipedia 22-12 DE DPR For details about this dataset please see telekom/wikipedia-22-12-de-dpr on GitHub. ## Creator This data set is compiled and open sourced by Philip May of Deutsche Telekom. ## Licensing ### The Code and Documentation Copyright (c) 2023-2024 Philip May, Deutsche Telekom AG Licensed under the MIT License (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License by reviewing the file LICENSE in the repository. ### The Wikipedia Texts, Questions and Imperative Questions The Wikipedia texts are licensed under CC BY-SA 4.0 Deed by the corresponding authors of the German Wikipedia. The questions and imperative questions are copyright (CC BY-SA 4.0 Deed) by Philip May, Deutsche Telekom AG. Indication of changes: - data source is the Cohere/wikipedia-22-12-de-embeddings dataset on Hugging Face Hub - we took 'wiki_id', 'title' and 'text' - did some normalization and filtering - and merged the texts to an appropriate token count - details can be found in the respective notebooks
[ "# Wikipedia 22-12 DE DPR\n\nFor details about this dataset please see\ntelekom/wikipedia-22-12-de-dpr\non GitHub.", "## Creator\n\nThis data set is compiled and open sourced by Philip May\nof Deutsche Telekom.", "## Licensing", "### The Code and Documentation\n\nCopyright (c) 2023-2024 Philip May, Deutsche Telekom AG\n\nLicensed under the MIT License (the \"License\"); you may not use this file except in compliance with the License.\nYou may obtain a copy of the License by reviewing the file\nLICENSE in the repository.", "### The Wikipedia Texts, Questions and Imperative Questions\n\nThe Wikipedia texts are licensed under CC BY-SA 4.0 Deed\nby the corresponding authors of the German Wikipedia. The questions and\nimperative questions are copyright (CC BY-SA 4.0 Deed) by\nPhilip May, Deutsche Telekom AG.\nIndication of changes:\n\n- data source is the Cohere/wikipedia-22-12-de-embeddings dataset on Hugging Face Hub\n- we took 'wiki_id', 'title' and 'text'\n- did some normalization and filtering\n- and merged the texts to an appropriate token count\n- details can be found in the respective notebooks" ]
[ "TAGS\n#task_categories-question-answering #size_categories-100K<n<1M #language-German #license-cc-by-sa-4.0 #region-us \n", "# Wikipedia 22-12 DE DPR\n\nFor details about this dataset please see\ntelekom/wikipedia-22-12-de-dpr\non GitHub.", "## Creator\n\nThis data set is compiled and open sourced by Philip May\nof Deutsche Telekom.", "## Licensing", "### The Code and Documentation\n\nCopyright (c) 2023-2024 Philip May, Deutsche Telekom AG\n\nLicensed under the MIT License (the \"License\"); you may not use this file except in compliance with the License.\nYou may obtain a copy of the License by reviewing the file\nLICENSE in the repository.", "### The Wikipedia Texts, Questions and Imperative Questions\n\nThe Wikipedia texts are licensed under CC BY-SA 4.0 Deed\nby the corresponding authors of the German Wikipedia. The questions and\nimperative questions are copyright (CC BY-SA 4.0 Deed) by\nPhilip May, Deutsche Telekom AG.\nIndication of changes:\n\n- data source is the Cohere/wikipedia-22-12-de-embeddings dataset on Hugging Face Hub\n- we took 'wiki_id', 'title' and 'text'\n- did some normalization and filtering\n- and merged the texts to an appropriate token count\n- details can be found in the respective notebooks" ]
57f99cb791369f3380df222272569724bc218c41
# A new and empty dataset Write something to introduce the project ## Features Attract users ## A guide for developers [Developers' Guide](developers.md)
czxdev/test-data
[ "region:us" ]
2024-01-24T12:49:19+00:00
{}
2024-01-24T12:56:58+00:00
[]
[]
TAGS #region-us
# A new and empty dataset Write something to introduce the project ## Features Attract users ## A guide for developers Developers' Guide
[ "# A new and empty dataset\n\nWrite something to introduce the project", "## Features\n\nAttract users", "## A guide for developers\nDevelopers' Guide" ]
[ "TAGS\n#region-us \n", "# A new and empty dataset\n\nWrite something to introduce the project", "## Features\n\nAttract users", "## A guide for developers\nDevelopers' Guide" ]
51868ed0e7708a2b9933c56b9edf7f09428330c6
# Dataset Card for Evaluation run of Weyaxi/Einstein-bagel-7B <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [Weyaxi/Einstein-bagel-7B](https://huggingface.co/Weyaxi/Einstein-bagel-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_Weyaxi__Einstein-bagel-7B", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2024-01-24T13:17:52.314326](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__Einstein-bagel-7B/blob/main/results_2024-01-24T13-17-52.314326.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.6319266433643548, "acc_stderr": 0.032455580300510795, "acc_norm": 0.6389579812660248, "acc_norm_stderr": 0.03312727758951937, "mc1": 0.4589963280293758, "mc1_stderr": 0.017444544447661196, "mc2": 0.6333126520095955, "mc2_stderr": 0.015483397855951944 }, "harness|arc:challenge|25": { "acc": 0.6322525597269625, "acc_stderr": 0.01409099561816848, "acc_norm": 0.6689419795221843, "acc_norm_stderr": 0.01375206241981783 }, "harness|hellaswag|10": { "acc": 0.6609241187014538, "acc_stderr": 0.0047242814878193755, "acc_norm": 0.8481378211511651, "acc_norm_stderr": 0.003581537847581781 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.31, "acc_stderr": 0.046482319871173156, "acc_norm": 0.31, "acc_norm_stderr": 0.046482319871173156 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.5703703703703704, "acc_stderr": 0.042763494943765995, "acc_norm": 0.5703703703703704, "acc_norm_stderr": 0.042763494943765995 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.6710526315789473, "acc_stderr": 0.038234289699266046, "acc_norm": 0.6710526315789473, "acc_norm_stderr": 0.038234289699266046 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.6, "acc_stderr": 0.04923659639173309, "acc_norm": 0.6, "acc_norm_stderr": 0.04923659639173309 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.6716981132075471, "acc_stderr": 0.028901593612411784, "acc_norm": 0.6716981132075471, "acc_norm_stderr": 0.028901593612411784 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.7361111111111112, "acc_stderr": 0.03685651095897532, "acc_norm": 0.7361111111111112, "acc_norm_stderr": 0.03685651095897532 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.47, "acc_stderr": 0.050161355804659205, "acc_norm": 0.47, "acc_norm_stderr": 0.050161355804659205 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.5, "acc_stderr": 0.050251890762960605, "acc_norm": 0.5, 
"acc_norm_stderr": 0.050251890762960605 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.36, "acc_stderr": 0.048241815132442176, "acc_norm": 0.36, "acc_norm_stderr": 0.048241815132442176 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6242774566473989, "acc_stderr": 0.036928207672648664, "acc_norm": 0.6242774566473989, "acc_norm_stderr": 0.036928207672648664 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.39215686274509803, "acc_stderr": 0.04858083574266345, "acc_norm": 0.39215686274509803, "acc_norm_stderr": 0.04858083574266345 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.73, "acc_stderr": 0.044619604333847394, "acc_norm": 0.73, "acc_norm_stderr": 0.044619604333847394 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.5276595744680851, "acc_stderr": 0.03263597118409769, "acc_norm": 0.5276595744680851, "acc_norm_stderr": 0.03263597118409769 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.47368421052631576, "acc_stderr": 0.04697085136647863, "acc_norm": 0.47368421052631576, "acc_norm_stderr": 0.04697085136647863 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.5517241379310345, "acc_stderr": 0.04144311810878151, "acc_norm": 0.5517241379310345, "acc_norm_stderr": 0.04144311810878151 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.40476190476190477, "acc_stderr": 0.025279850397404907, "acc_norm": 0.40476190476190477, "acc_norm_stderr": 0.025279850397404907 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.5079365079365079, "acc_stderr": 0.044715725362943486, "acc_norm": 0.5079365079365079, "acc_norm_stderr": 0.044715725362943486 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.32, "acc_stderr": 0.04688261722621504, "acc_norm": 0.32, "acc_norm_stderr": 0.04688261722621504 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.7806451612903226, "acc_stderr": 0.023540799358723295, "acc_norm": 0.7806451612903226, "acc_norm_stderr": 0.023540799358723295 }, 
"harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.5123152709359606, "acc_stderr": 0.035169204442208966, "acc_norm": 0.5123152709359606, "acc_norm_stderr": 0.035169204442208966 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.67, "acc_stderr": 0.04725815626252609, "acc_norm": 0.67, "acc_norm_stderr": 0.04725815626252609 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.7818181818181819, "acc_stderr": 0.03225078108306289, "acc_norm": 0.7818181818181819, "acc_norm_stderr": 0.03225078108306289 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.7828282828282829, "acc_stderr": 0.029376616484945637, "acc_norm": 0.7828282828282829, "acc_norm_stderr": 0.029376616484945637 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.8601036269430051, "acc_stderr": 0.025033870583015184, "acc_norm": 0.8601036269430051, "acc_norm_stderr": 0.025033870583015184 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.6487179487179487, "acc_stderr": 0.024203665177902803, "acc_norm": 0.6487179487179487, "acc_norm_stderr": 0.024203665177902803 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.3296296296296296, "acc_stderr": 0.02866120111652458, "acc_norm": 0.3296296296296296, "acc_norm_stderr": 0.02866120111652458 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.7352941176470589, "acc_stderr": 0.028657491285071977, "acc_norm": 0.7352941176470589, "acc_norm_stderr": 0.028657491285071977 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.2980132450331126, "acc_stderr": 0.037345356767871984, "acc_norm": 0.2980132450331126, "acc_norm_stderr": 0.037345356767871984 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8495412844036697, "acc_stderr": 0.015328563932669237, "acc_norm": 0.8495412844036697, "acc_norm_stderr": 0.015328563932669237 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.5740740740740741, "acc_stderr": 
0.03372343271653062, "acc_norm": 0.5740740740740741, "acc_norm_stderr": 0.03372343271653062 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.803921568627451, "acc_stderr": 0.027865942286639325, "acc_norm": 0.803921568627451, "acc_norm_stderr": 0.027865942286639325 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.7848101265822784, "acc_stderr": 0.026750826994676177, "acc_norm": 0.7848101265822784, "acc_norm_stderr": 0.026750826994676177 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.6816143497757847, "acc_stderr": 0.03126580522513713, "acc_norm": 0.6816143497757847, "acc_norm_stderr": 0.03126580522513713 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.7557251908396947, "acc_stderr": 0.03768335959728745, "acc_norm": 0.7557251908396947, "acc_norm_stderr": 0.03768335959728745 }, "harness|hendrycksTest-international_law|5": { "acc": 0.7355371900826446, "acc_stderr": 0.04026187527591207, "acc_norm": 0.7355371900826446, "acc_norm_stderr": 0.04026187527591207 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.7962962962962963, "acc_stderr": 0.03893542518824847, "acc_norm": 0.7962962962962963, "acc_norm_stderr": 0.03893542518824847 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7484662576687117, "acc_stderr": 0.03408997886857529, "acc_norm": 0.7484662576687117, "acc_norm_stderr": 0.03408997886857529 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.48214285714285715, "acc_stderr": 0.047427623612430116, "acc_norm": 0.48214285714285715, "acc_norm_stderr": 0.047427623612430116 }, "harness|hendrycksTest-management|5": { "acc": 0.7864077669902912, "acc_stderr": 0.040580420156460344, "acc_norm": 0.7864077669902912, "acc_norm_stderr": 0.040580420156460344 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8803418803418803, "acc_stderr": 0.021262719400406964, "acc_norm": 0.8803418803418803, "acc_norm_stderr": 0.021262719400406964 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.73, "acc_stderr": 
0.044619604333847394, "acc_norm": 0.73, "acc_norm_stderr": 0.044619604333847394 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.8173690932311622, "acc_stderr": 0.013816335389973133, "acc_norm": 0.8173690932311622, "acc_norm_stderr": 0.013816335389973133 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.7109826589595376, "acc_stderr": 0.02440517393578323, "acc_norm": 0.7109826589595376, "acc_norm_stderr": 0.02440517393578323 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.40670391061452515, "acc_stderr": 0.01642881191589886, "acc_norm": 0.40670391061452515, "acc_norm_stderr": 0.01642881191589886 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.7450980392156863, "acc_stderr": 0.024954184324879912, "acc_norm": 0.7450980392156863, "acc_norm_stderr": 0.024954184324879912 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.7170418006430869, "acc_stderr": 0.02558306248998481, "acc_norm": 0.7170418006430869, "acc_norm_stderr": 0.02558306248998481 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.7006172839506173, "acc_stderr": 0.025483115601195455, "acc_norm": 0.7006172839506173, "acc_norm_stderr": 0.025483115601195455 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.46099290780141844, "acc_stderr": 0.029736592526424438, "acc_norm": 0.46099290780141844, "acc_norm_stderr": 0.029736592526424438 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.47522816166883963, "acc_stderr": 0.012754553719781753, "acc_norm": 0.47522816166883963, "acc_norm_stderr": 0.012754553719781753 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.6691176470588235, "acc_stderr": 0.02858270975389845, "acc_norm": 0.6691176470588235, "acc_norm_stderr": 0.02858270975389845 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6601307189542484, "acc_stderr": 0.01916241858862355, "acc_norm": 0.6601307189542484, "acc_norm_stderr": 0.01916241858862355 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6545454545454545, "acc_stderr": 
0.04554619617541054,
        "acc_norm": 0.6545454545454545,
        "acc_norm_stderr": 0.04554619617541054
    },
    "harness|hendrycksTest-security_studies|5": {
        "acc": 0.726530612244898,
        "acc_stderr": 0.028535560337128438,
        "acc_norm": 0.726530612244898,
        "acc_norm_stderr": 0.028535560337128438
    },
    "harness|hendrycksTest-sociology|5": {
        "acc": 0.8407960199004975,
        "acc_stderr": 0.02587064676616913,
        "acc_norm": 0.8407960199004975,
        "acc_norm_stderr": 0.02587064676616913
    },
    "harness|hendrycksTest-us_foreign_policy|5": {
        "acc": 0.83,
        "acc_stderr": 0.0377525168068637,
        "acc_norm": 0.83,
        "acc_norm_stderr": 0.0377525168068637
    },
    "harness|hendrycksTest-virology|5": {
        "acc": 0.5120481927710844,
        "acc_stderr": 0.03891364495835817,
        "acc_norm": 0.5120481927710844,
        "acc_norm_stderr": 0.03891364495835817
    },
    "harness|hendrycksTest-world_religions|5": {
        "acc": 0.8245614035087719,
        "acc_stderr": 0.02917088550072767,
        "acc_norm": 0.8245614035087719,
        "acc_norm_stderr": 0.02917088550072767
    },
    "harness|truthfulqa:mc|0": {
        "mc1": 0.4589963280293758,
        "mc1_stderr": 0.017444544447661196,
        "mc2": 0.6333126520095955,
        "mc2_stderr": 0.015483397855951944
    },
    "harness|winogrande|5": {
        "acc": 0.7916337805840569,
        "acc_stderr": 0.011414554399987726
    },
    "harness|gsm8k|5": {
        "acc": 0.2812736921910538,
        "acc_stderr": 0.01238478931094024
    }
}
```

## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]

### Dataset Sources [optional]

<!-- Provide the basic links for the dataset. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the dataset is intended to be used. -->

### Direct Use

<!-- This section describes suitable use cases for the dataset. -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->

[More Information Needed]

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

[More Information Needed]

## Dataset Creation

### Curation Rationale

<!-- Motivation for the creation of this dataset. -->

[More Information Needed]

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

[More Information Needed]

#### Who are the source data producers?

<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->

[More Information Needed]

### Annotations [optional]

<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->

#### Annotation process

<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->

[More Information Needed]

#### Who are the annotators?

<!-- This section describes the people or systems who created the annotations. -->

[More Information Needed]

#### Personal and Sensitive Information

<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.

## Citation [optional]

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Dataset Card Authors [optional]

[More Information Needed]

## Dataset Card Contact

[More Information Needed]
open-llm-leaderboard/details_Weyaxi__Einstein-bagel-7B
[ "region:us" ]
2024-01-24T13:20:11+00:00
{"pretty_name": "Evaluation run of Weyaxi/Einstein-bagel-7B", "dataset_summary": "Dataset automatically created during the evaluation run of model [Weyaxi/Einstein-bagel-7B](https://huggingface.co/Weyaxi/Einstein-bagel-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Weyaxi__Einstein-bagel-7B\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-24T13:17:52.314326](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__Einstein-bagel-7B/blob/main/results_2024-01-24T13-17-52.314326.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6319266433643548,\n \"acc_stderr\": 0.032455580300510795,\n \"acc_norm\": 0.6389579812660248,\n \"acc_norm_stderr\": 0.03312727758951937,\n \"mc1\": 0.4589963280293758,\n \"mc1_stderr\": 0.017444544447661196,\n \"mc2\": 0.6333126520095955,\n \"mc2_stderr\": 0.015483397855951944\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6322525597269625,\n \"acc_stderr\": 0.01409099561816848,\n \"acc_norm\": 0.6689419795221843,\n \"acc_norm_stderr\": 0.01375206241981783\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6609241187014538,\n \"acc_stderr\": 0.0047242814878193755,\n \"acc_norm\": 0.8481378211511651,\n \"acc_norm_stderr\": 0.003581537847581781\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.31,\n \"acc_stderr\": 0.046482319871173156,\n \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.046482319871173156\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5703703703703704,\n \"acc_stderr\": 0.042763494943765995,\n \"acc_norm\": 0.5703703703703704,\n \"acc_norm_stderr\": 0.042763494943765995\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6710526315789473,\n \"acc_stderr\": 0.038234289699266046,\n \"acc_norm\": 0.6710526315789473,\n \"acc_norm_stderr\": 0.038234289699266046\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.6,\n \"acc_stderr\": 0.04923659639173309,\n \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.04923659639173309\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6716981132075471,\n \"acc_stderr\": 0.028901593612411784,\n \"acc_norm\": 0.6716981132075471,\n \"acc_norm_stderr\": 0.028901593612411784\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7361111111111112,\n \"acc_stderr\": 0.03685651095897532,\n \"acc_norm\": 0.7361111111111112,\n \"acc_norm_stderr\": 0.03685651095897532\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.47,\n 
\"acc_stderr\": 0.050161355804659205,\n \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.050161355804659205\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.050251890762960605,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.050251890762960605\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.36,\n \"acc_stderr\": 0.048241815132442176,\n \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.048241815132442176\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6242774566473989,\n \"acc_stderr\": 0.036928207672648664,\n \"acc_norm\": 0.6242774566473989,\n \"acc_norm_stderr\": 0.036928207672648664\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.39215686274509803,\n \"acc_stderr\": 0.04858083574266345,\n \"acc_norm\": 0.39215686274509803,\n \"acc_norm_stderr\": 0.04858083574266345\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.73,\n \"acc_stderr\": 0.044619604333847394,\n \"acc_norm\": 0.73,\n \"acc_norm_stderr\": 0.044619604333847394\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5276595744680851,\n \"acc_stderr\": 0.03263597118409769,\n \"acc_norm\": 0.5276595744680851,\n \"acc_norm_stderr\": 0.03263597118409769\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.47368421052631576,\n \"acc_stderr\": 0.04697085136647863,\n \"acc_norm\": 0.47368421052631576,\n \"acc_norm_stderr\": 0.04697085136647863\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5517241379310345,\n \"acc_stderr\": 0.04144311810878151,\n \"acc_norm\": 0.5517241379310345,\n \"acc_norm_stderr\": 0.04144311810878151\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.40476190476190477,\n \"acc_stderr\": 0.025279850397404907,\n \"acc_norm\": 0.40476190476190477,\n \"acc_norm_stderr\": 0.025279850397404907\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.5079365079365079,\n \"acc_stderr\": 
0.044715725362943486,\n \"acc_norm\": 0.5079365079365079,\n \"acc_norm_stderr\": 0.044715725362943486\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.04688261722621504,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.04688261722621504\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7806451612903226,\n \"acc_stderr\": 0.023540799358723295,\n \"acc_norm\": 0.7806451612903226,\n \"acc_norm_stderr\": 0.023540799358723295\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5123152709359606,\n \"acc_stderr\": 0.035169204442208966,\n \"acc_norm\": 0.5123152709359606,\n \"acc_norm_stderr\": 0.035169204442208966\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.67,\n \"acc_stderr\": 0.04725815626252609,\n \"acc_norm\": 0.67,\n \"acc_norm_stderr\": 0.04725815626252609\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7818181818181819,\n \"acc_stderr\": 0.03225078108306289,\n \"acc_norm\": 0.7818181818181819,\n \"acc_norm_stderr\": 0.03225078108306289\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7828282828282829,\n \"acc_stderr\": 0.029376616484945637,\n \"acc_norm\": 0.7828282828282829,\n \"acc_norm_stderr\": 0.029376616484945637\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8601036269430051,\n \"acc_stderr\": 0.025033870583015184,\n \"acc_norm\": 0.8601036269430051,\n \"acc_norm_stderr\": 0.025033870583015184\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6487179487179487,\n \"acc_stderr\": 0.024203665177902803,\n \"acc_norm\": 0.6487179487179487,\n \"acc_norm_stderr\": 0.024203665177902803\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3296296296296296,\n \"acc_stderr\": 0.02866120111652458,\n \"acc_norm\": 0.3296296296296296,\n \"acc_norm_stderr\": 0.02866120111652458\n },\n 
\"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.7352941176470589,\n \"acc_stderr\": 0.028657491285071977,\n \"acc_norm\": 0.7352941176470589,\n \"acc_norm_stderr\": 0.028657491285071977\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.2980132450331126,\n \"acc_stderr\": 0.037345356767871984,\n \"acc_norm\": 0.2980132450331126,\n \"acc_norm_stderr\": 0.037345356767871984\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8495412844036697,\n \"acc_stderr\": 0.015328563932669237,\n \"acc_norm\": 0.8495412844036697,\n \"acc_norm_stderr\": 0.015328563932669237\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5740740740740741,\n \"acc_stderr\": 0.03372343271653062,\n \"acc_norm\": 0.5740740740740741,\n \"acc_norm_stderr\": 0.03372343271653062\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.803921568627451,\n \"acc_stderr\": 0.027865942286639325,\n \"acc_norm\": 0.803921568627451,\n \"acc_norm_stderr\": 0.027865942286639325\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7848101265822784,\n \"acc_stderr\": 0.026750826994676177,\n \"acc_norm\": 0.7848101265822784,\n \"acc_norm_stderr\": 0.026750826994676177\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6816143497757847,\n \"acc_stderr\": 0.03126580522513713,\n \"acc_norm\": 0.6816143497757847,\n \"acc_norm_stderr\": 0.03126580522513713\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7557251908396947,\n \"acc_stderr\": 0.03768335959728745,\n \"acc_norm\": 0.7557251908396947,\n \"acc_norm_stderr\": 0.03768335959728745\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7355371900826446,\n \"acc_stderr\": 0.04026187527591207,\n \"acc_norm\": 0.7355371900826446,\n \"acc_norm_stderr\": 0.04026187527591207\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7962962962962963,\n \"acc_stderr\": 0.03893542518824847,\n 
\"acc_norm\": 0.7962962962962963,\n \"acc_norm_stderr\": 0.03893542518824847\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7484662576687117,\n \"acc_stderr\": 0.03408997886857529,\n \"acc_norm\": 0.7484662576687117,\n \"acc_norm_stderr\": 0.03408997886857529\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.48214285714285715,\n \"acc_stderr\": 0.047427623612430116,\n \"acc_norm\": 0.48214285714285715,\n \"acc_norm_stderr\": 0.047427623612430116\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7864077669902912,\n \"acc_stderr\": 0.040580420156460344,\n \"acc_norm\": 0.7864077669902912,\n \"acc_norm_stderr\": 0.040580420156460344\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8803418803418803,\n \"acc_stderr\": 0.021262719400406964,\n \"acc_norm\": 0.8803418803418803,\n \"acc_norm_stderr\": 0.021262719400406964\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.73,\n \"acc_stderr\": 0.044619604333847394,\n \"acc_norm\": 0.73,\n \"acc_norm_stderr\": 0.044619604333847394\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8173690932311622,\n \"acc_stderr\": 0.013816335389973133,\n \"acc_norm\": 0.8173690932311622,\n \"acc_norm_stderr\": 0.013816335389973133\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7109826589595376,\n \"acc_stderr\": 0.02440517393578323,\n \"acc_norm\": 0.7109826589595376,\n \"acc_norm_stderr\": 0.02440517393578323\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.40670391061452515,\n \"acc_stderr\": 0.01642881191589886,\n \"acc_norm\": 0.40670391061452515,\n \"acc_norm_stderr\": 0.01642881191589886\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7450980392156863,\n \"acc_stderr\": 0.024954184324879912,\n \"acc_norm\": 0.7450980392156863,\n \"acc_norm_stderr\": 0.024954184324879912\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7170418006430869,\n \"acc_stderr\": 0.02558306248998481,\n 
\"acc_norm\": 0.7170418006430869,\n \"acc_norm_stderr\": 0.02558306248998481\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7006172839506173,\n \"acc_stderr\": 0.025483115601195455,\n \"acc_norm\": 0.7006172839506173,\n \"acc_norm_stderr\": 0.025483115601195455\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.46099290780141844,\n \"acc_stderr\": 0.029736592526424438,\n \"acc_norm\": 0.46099290780141844,\n \"acc_norm_stderr\": 0.029736592526424438\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.47522816166883963,\n \"acc_stderr\": 0.012754553719781753,\n \"acc_norm\": 0.47522816166883963,\n \"acc_norm_stderr\": 0.012754553719781753\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6691176470588235,\n \"acc_stderr\": 0.02858270975389845,\n \"acc_norm\": 0.6691176470588235,\n \"acc_norm_stderr\": 0.02858270975389845\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6601307189542484,\n \"acc_stderr\": 0.01916241858862355,\n \"acc_norm\": 0.6601307189542484,\n \"acc_norm_stderr\": 0.01916241858862355\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6545454545454545,\n \"acc_stderr\": 0.04554619617541054,\n \"acc_norm\": 0.6545454545454545,\n \"acc_norm_stderr\": 0.04554619617541054\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.726530612244898,\n \"acc_stderr\": 0.028535560337128438,\n \"acc_norm\": 0.726530612244898,\n \"acc_norm_stderr\": 0.028535560337128438\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8407960199004975,\n \"acc_stderr\": 0.02587064676616913,\n \"acc_norm\": 0.8407960199004975,\n \"acc_norm_stderr\": 0.02587064676616913\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.83,\n \"acc_stderr\": 0.0377525168068637,\n \"acc_norm\": 0.83,\n \"acc_norm_stderr\": 0.0377525168068637\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5120481927710844,\n \"acc_stderr\": 
0.03891364495835817,\n \"acc_norm\": 0.5120481927710844,\n \"acc_norm_stderr\": 0.03891364495835817\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8245614035087719,\n \"acc_stderr\": 0.02917088550072767,\n \"acc_norm\": 0.8245614035087719,\n \"acc_norm_stderr\": 0.02917088550072767\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4589963280293758,\n \"mc1_stderr\": 0.017444544447661196,\n \"mc2\": 0.6333126520095955,\n \"mc2_stderr\": 0.015483397855951944\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7916337805840569,\n \"acc_stderr\": 0.011414554399987726\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.2812736921910538,\n \"acc_stderr\": 0.01238478931094024\n }\n}\n```", "repo_url": "https://huggingface.co/Weyaxi/Einstein-bagel-7B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|arc:challenge|25_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|gsm8k|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|hellaswag|10_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T13-17-52.314326.parquet", 
"**/details_harness|hendrycksTest-anatomy|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T13-17-52.314326.parquet", 
"**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T13-17-52.314326.parquet", 
"**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T13-17-52.314326.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T13-17-52.314326.parquet", 
"**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T13-17-52.314326.parquet", 
"**/details_harness|hendrycksTest-public_relations|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-24T13-17-52.314326.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": 
["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": 
["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", 
"path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": 
"2024_01_24T13_17_52.314326", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": 
"2024_01_24T13_17_52.314326", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-24T13-17-52.314326.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["**/details_harness|winogrande|5_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-24T13-17-52.314326.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_01_24T13_17_52.314326", "path": ["results_2024-01-24T13-17-52.314326.parquet"]}, {"split": "latest", "path": ["results_2024-01-24T13-17-52.314326.parquet"]}]}]}
2024-01-24T13:20:39+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of Weyaxi/Einstein-bagel-7B Dataset automatically created during the evaluation run of model Weyaxi/Einstein-bagel-7B on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2024-01-24T13:17:52.314326 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
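The code snippet this card's "do the following:" sentence introduces was flattened out of this copy (the FelixChao card below retains it). A minimal sketch of the same pattern; the repo id here is an assumption based on the leaderboard's naming convention ("details_" prefix, "/" replaced by "__"):

```python
# Sketch of the load pattern used by these evaluation-detail datasets.
# Assumption: the leaderboard derives the repo id by prefixing "details_"
# and replacing "/" with "__" in the model name.
model = "Weyaxi/Einstein-bagel-7B"
repo_id = "open-llm-leaderboard/details_" + model.replace("/", "__")

# Then, as in the other cards in this dump:
# from datasets import load_dataset
# data = load_dataset(repo_id, "harness_winogrande_5", split="train")
```

Passing `split="latest"` instead would select the split that always points at the most recent run.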
[ "# Dataset Card for Evaluation run of Weyaxi/Einstein-bagel-7B\n\n\n\nDataset automatically created during the evaluation run of model Weyaxi/Einstein-bagel-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-01-24T13:17:52.314326(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of Weyaxi/Einstein-bagel-7B\n\n\n\nDataset automatically created during the evaluation run of model Weyaxi/Einstein-bagel-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-01-24T13:17:52.314326(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
e55c40ca461255225cd463191a43478d50a1ed6f
# Dataset Card for Evaluation run of FelixChao/WestSeverus-7B-DPO-v2 <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [FelixChao/WestSeverus-7B-DPO-v2](https://huggingface.co/FelixChao/WestSeverus-7B-DPO-v2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_FelixChao__WestSeverus-7B-DPO-v2", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2024-01-24T13:24:18.891595](https://huggingface.co/datasets/open-llm-leaderboard/details_FelixChao__WestSeverus-7B-DPO-v2/blob/main/results_2024-01-24T13-24-18.891595.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.6535299062128674, "acc_stderr": 0.032051357947894016, "acc_norm": 0.6530323423754857, "acc_norm_stderr": 0.03272062624963206, "mc1": 0.5471236230110159, "mc1_stderr": 0.01742558984831402, "mc2": 0.7236632495570844, "mc2_stderr": 0.014431747881822006 }, "harness|arc:challenge|25": { "acc": 0.6885665529010239, "acc_stderr": 0.01353247209985094, "acc_norm": 0.7141638225255973, "acc_norm_stderr": 0.013203196088537376 }, "harness|hellaswag|10": { "acc": 0.6955785700059749, "acc_stderr": 0.004592215118295279, "acc_norm": 0.882692690699064, "acc_norm_stderr": 0.003211284760701656 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.36, "acc_stderr": 0.04824181513244218, "acc_norm": 0.36, "acc_norm_stderr": 0.04824181513244218 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.6666666666666666, "acc_stderr": 0.04072314811876837, "acc_norm": 0.6666666666666666, "acc_norm_stderr": 0.04072314811876837 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.7039473684210527, "acc_stderr": 0.03715062154998904, "acc_norm": 0.7039473684210527, "acc_norm_stderr": 0.03715062154998904 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.63, "acc_stderr": 0.04852365870939099, "acc_norm": 0.63, "acc_norm_stderr": 0.04852365870939099 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.7018867924528301, "acc_stderr": 0.028152837942493875, "acc_norm": 0.7018867924528301, "acc_norm_stderr": 0.028152837942493875 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.7847222222222222, "acc_stderr": 0.03437079344106135, "acc_norm": 0.7847222222222222, "acc_norm_stderr": 0.03437079344106135 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.47, "acc_stderr": 0.050161355804659205, "acc_norm": 0.47, "acc_norm_stderr": 0.050161355804659205 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.52, "acc_stderr": 0.050211673156867795, "acc_norm": 0.52, "acc_norm_stderr": 
0.050211673156867795 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.32, "acc_stderr": 0.04688261722621504, "acc_norm": 0.32, "acc_norm_stderr": 0.04688261722621504 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6878612716763006, "acc_stderr": 0.03533133389323657, "acc_norm": 0.6878612716763006, "acc_norm_stderr": 0.03533133389323657 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.4411764705882353, "acc_stderr": 0.049406356306056595, "acc_norm": 0.4411764705882353, "acc_norm_stderr": 0.049406356306056595 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.76, "acc_stderr": 0.04292346959909283, "acc_norm": 0.76, "acc_norm_stderr": 0.04292346959909283 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.574468085106383, "acc_stderr": 0.03232146916224468, "acc_norm": 0.574468085106383, "acc_norm_stderr": 0.03232146916224468 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.49122807017543857, "acc_stderr": 0.04702880432049615, "acc_norm": 0.49122807017543857, "acc_norm_stderr": 0.04702880432049615 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.5517241379310345, "acc_stderr": 0.04144311810878152, "acc_norm": 0.5517241379310345, "acc_norm_stderr": 0.04144311810878152 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.42592592592592593, "acc_stderr": 0.02546714904546955, "acc_norm": 0.42592592592592593, "acc_norm_stderr": 0.02546714904546955 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.4603174603174603, "acc_stderr": 0.04458029125470973, "acc_norm": 0.4603174603174603, "acc_norm_stderr": 0.04458029125470973 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.32, "acc_stderr": 0.046882617226215034, "acc_norm": 0.32, "acc_norm_stderr": 0.046882617226215034 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.7806451612903226, "acc_stderr": 0.0235407993587233, "acc_norm": 0.7806451612903226, "acc_norm_stderr": 0.0235407993587233 }, 
"harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.4827586206896552, "acc_stderr": 0.035158955511657, "acc_norm": 0.4827586206896552, "acc_norm_stderr": 0.035158955511657 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.69, "acc_stderr": 0.04648231987117316, "acc_norm": 0.69, "acc_norm_stderr": 0.04648231987117316 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.7636363636363637, "acc_stderr": 0.03317505930009181, "acc_norm": 0.7636363636363637, "acc_norm_stderr": 0.03317505930009181 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.803030303030303, "acc_stderr": 0.028335609732463362, "acc_norm": 0.803030303030303, "acc_norm_stderr": 0.028335609732463362 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.8860103626943006, "acc_stderr": 0.022935144053919443, "acc_norm": 0.8860103626943006, "acc_norm_stderr": 0.022935144053919443 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.6717948717948717, "acc_stderr": 0.023807633198657266, "acc_norm": 0.6717948717948717, "acc_norm_stderr": 0.023807633198657266 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.34444444444444444, "acc_stderr": 0.02897264888484427, "acc_norm": 0.34444444444444444, "acc_norm_stderr": 0.02897264888484427 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.6680672268907563, "acc_stderr": 0.03058869701378364, "acc_norm": 0.6680672268907563, "acc_norm_stderr": 0.03058869701378364 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.3576158940397351, "acc_stderr": 0.03913453431177258, "acc_norm": 0.3576158940397351, "acc_norm_stderr": 0.03913453431177258 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8458715596330275, "acc_stderr": 0.015480826865374303, "acc_norm": 0.8458715596330275, "acc_norm_stderr": 0.015480826865374303 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.5092592592592593, "acc_stderr": 0.034093869469927006, 
"acc_norm": 0.5092592592592593, "acc_norm_stderr": 0.034093869469927006 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.8284313725490197, "acc_stderr": 0.026460569561240644, "acc_norm": 0.8284313725490197, "acc_norm_stderr": 0.026460569561240644 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.810126582278481, "acc_stderr": 0.02553010046023349, "acc_norm": 0.810126582278481, "acc_norm_stderr": 0.02553010046023349 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.6816143497757847, "acc_stderr": 0.03126580522513713, "acc_norm": 0.6816143497757847, "acc_norm_stderr": 0.03126580522513713 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.8091603053435115, "acc_stderr": 0.03446513350752598, "acc_norm": 0.8091603053435115, "acc_norm_stderr": 0.03446513350752598 }, "harness|hendrycksTest-international_law|5": { "acc": 0.7768595041322314, "acc_stderr": 0.03800754475228733, "acc_norm": 0.7768595041322314, "acc_norm_stderr": 0.03800754475228733 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.75, "acc_stderr": 0.04186091791394607, "acc_norm": 0.75, "acc_norm_stderr": 0.04186091791394607 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7423312883435583, "acc_stderr": 0.03436150827846917, "acc_norm": 0.7423312883435583, "acc_norm_stderr": 0.03436150827846917 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.4642857142857143, "acc_stderr": 0.04733667890053756, "acc_norm": 0.4642857142857143, "acc_norm_stderr": 0.04733667890053756 }, "harness|hendrycksTest-management|5": { "acc": 0.7961165048543689, "acc_stderr": 0.039891398595317706, "acc_norm": 0.7961165048543689, "acc_norm_stderr": 0.039891398595317706 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8803418803418803, "acc_stderr": 0.021262719400406974, "acc_norm": 0.8803418803418803, "acc_norm_stderr": 0.021262719400406974 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.75, "acc_stderr": 0.04351941398892446, "acc_norm": 0.75, "acc_norm_stderr": 
0.04351941398892446 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.8212005108556832, "acc_stderr": 0.013702643715368983, "acc_norm": 0.8212005108556832, "acc_norm_stderr": 0.013702643715368983 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.7398843930635838, "acc_stderr": 0.023618678310069356, "acc_norm": 0.7398843930635838, "acc_norm_stderr": 0.023618678310069356 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.4044692737430168, "acc_stderr": 0.01641444091729315, "acc_norm": 0.4044692737430168, "acc_norm_stderr": 0.01641444091729315 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.7156862745098039, "acc_stderr": 0.025829163272757482, "acc_norm": 0.7156862745098039, "acc_norm_stderr": 0.025829163272757482 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.7202572347266881, "acc_stderr": 0.025494259350694912, "acc_norm": 0.7202572347266881, "acc_norm_stderr": 0.025494259350694912 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.7407407407407407, "acc_stderr": 0.02438366553103545, "acc_norm": 0.7407407407407407, "acc_norm_stderr": 0.02438366553103545 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.48226950354609927, "acc_stderr": 0.02980873964223777, "acc_norm": 0.48226950354609927, "acc_norm_stderr": 0.02980873964223777 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.47131681877444587, "acc_stderr": 0.012749206007657476, "acc_norm": 0.47131681877444587, "acc_norm_stderr": 0.012749206007657476 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.6691176470588235, "acc_stderr": 0.028582709753898445, "acc_norm": 0.6691176470588235, "acc_norm_stderr": 0.028582709753898445 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6830065359477124, "acc_stderr": 0.018824219512706207, "acc_norm": 0.6830065359477124, "acc_norm_stderr": 0.018824219512706207 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.7, "acc_stderr": 0.04389311454644286, "acc_norm": 0.7, "acc_norm_stderr": 
0.04389311454644286
    },
    "harness|hendrycksTest-security_studies|5": {
        "acc": 0.726530612244898,
        "acc_stderr": 0.028535560337128448,
        "acc_norm": 0.726530612244898,
        "acc_norm_stderr": 0.028535560337128448
    },
    "harness|hendrycksTest-sociology|5": {
        "acc": 0.8407960199004975,
        "acc_stderr": 0.025870646766169136,
        "acc_norm": 0.8407960199004975,
        "acc_norm_stderr": 0.025870646766169136
    },
    "harness|hendrycksTest-us_foreign_policy|5": {
        "acc": 0.87,
        "acc_stderr": 0.033799766898963086,
        "acc_norm": 0.87,
        "acc_norm_stderr": 0.033799766898963086
    },
    "harness|hendrycksTest-virology|5": {
        "acc": 0.5481927710843374,
        "acc_stderr": 0.03874371556587953,
        "acc_norm": 0.5481927710843374,
        "acc_norm_stderr": 0.03874371556587953
    },
    "harness|hendrycksTest-world_religions|5": {
        "acc": 0.8362573099415205,
        "acc_stderr": 0.028380919596145866,
        "acc_norm": 0.8362573099415205,
        "acc_norm_stderr": 0.028380919596145866
    },
    "harness|truthfulqa:mc|0": {
        "mc1": 0.5471236230110159,
        "mc1_stderr": 0.01742558984831402,
        "mc2": 0.7236632495570844,
        "mc2_stderr": 0.014431747881822006
    },
    "harness|winogrande|5": {
        "acc": 0.8326756116811366,
        "acc_stderr": 0.010490608806828075
    },
    "harness|gsm8k|5": {
        "acc": 0.7164518574677786,
        "acc_stderr": 0.012415070917508118
    }
}
```

## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]

### Dataset Sources [optional]

<!-- Provide the basic links for the dataset. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the dataset is intended to be used. -->

### Direct Use

<!-- This section describes suitable use cases for the dataset. -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->

[More Information Needed]

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

[More Information Needed]

## Dataset Creation

### Curation Rationale

<!-- Motivation for the creation of this dataset. -->

[More Information Needed]

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

[More Information Needed]

#### Who are the source data producers?

<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->

[More Information Needed]

### Annotations [optional]

<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->

#### Annotation process

<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->

[More Information Needed]

#### Who are the annotators?

<!-- This section describes the people or systems who created the annotations. -->

[More Information Needed]

#### Personal and Sensitive Information

<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.

## Citation [optional]

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Dataset Card Authors [optional]

[More Information Needed]

## Dataset Card Contact

[More Information Needed]
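The `load_dataset` snippet earlier in this card assumes you already know the details-repo id. Judging from the repo names that appear in this card and its neighbours (`details_jsfs11__TurdusTrixBeagle-DARETIES-7B`, `details_FelixChao__WestSeverus-7B-DPO-v2`), that id is derived mechanically from the evaluated model id. A minimal sketch of that mapping (an inference from the names above, not an official leaderboard API; `details_repo_id` is a hypothetical helper name):

```python
def details_repo_id(model_id: str) -> str:
    """Map a Hub model id "org/name" to its leaderboard details-repo id.

    Assumption: the Open LLM Leaderboard names details repos
    "open-llm-leaderboard/details_<org>__<name>", i.e. the "/" in the
    model id is replaced with "__".
    """
    org, name = model_id.split("/", 1)
    return f"open-llm-leaderboard/details_{org}__{name}"


print(details_repo_id("jsfs11/TurdusTrixBeagle-DARETIES-7B"))
# -> open-llm-leaderboard/details_jsfs11__TurdusTrixBeagle-DARETIES-7B
```

The resulting id is what you would pass as the first argument of `load_dataset`, together with a config name such as `harness_winogrande_5` and `split="train"` for the latest results.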
open-llm-leaderboard/details_FelixChao__WestSeverus-7B-DPO-v2
[ "region:us" ]
2024-01-24T13:26:40+00:00
{"pretty_name": "Evaluation run of FelixChao/WestSeverus-7B-DPO-v2", "dataset_summary": "Dataset automatically created during the evaluation run of model [FelixChao/WestSeverus-7B-DPO-v2](https://huggingface.co/FelixChao/WestSeverus-7B-DPO-v2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_FelixChao__WestSeverus-7B-DPO-v2\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-24T13:24:18.891595](https://huggingface.co/datasets/open-llm-leaderboard/details_FelixChao__WestSeverus-7B-DPO-v2/blob/main/results_2024-01-24T13-24-18.891595.json)(note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6535299062128674,\n \"acc_stderr\": 0.032051357947894016,\n \"acc_norm\": 0.6530323423754857,\n \"acc_norm_stderr\": 0.03272062624963206,\n \"mc1\": 0.5471236230110159,\n \"mc1_stderr\": 0.01742558984831402,\n \"mc2\": 0.7236632495570844,\n \"mc2_stderr\": 0.014431747881822006\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6885665529010239,\n \"acc_stderr\": 0.01353247209985094,\n \"acc_norm\": 0.7141638225255973,\n \"acc_norm_stderr\": 0.013203196088537376\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6955785700059749,\n \"acc_stderr\": 0.004592215118295279,\n \"acc_norm\": 0.882692690699064,\n \"acc_norm_stderr\": 0.003211284760701656\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.36,\n \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6666666666666666,\n \"acc_stderr\": 0.04072314811876837,\n \"acc_norm\": 0.6666666666666666,\n \"acc_norm_stderr\": 0.04072314811876837\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.7039473684210527,\n \"acc_stderr\": 0.03715062154998904,\n \"acc_norm\": 0.7039473684210527,\n \"acc_norm_stderr\": 0.03715062154998904\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.63,\n \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\": 0.63,\n \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.7018867924528301,\n \"acc_stderr\": 0.028152837942493875,\n \"acc_norm\": 0.7018867924528301,\n \"acc_norm_stderr\": 0.028152837942493875\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7847222222222222,\n \"acc_stderr\": 0.03437079344106135,\n \"acc_norm\": 0.7847222222222222,\n \"acc_norm_stderr\": 0.03437079344106135\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.47,\n 
\"acc_stderr\": 0.050161355804659205,\n \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.050161355804659205\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.52,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.52,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.04688261722621504,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.04688261722621504\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6878612716763006,\n \"acc_stderr\": 0.03533133389323657,\n \"acc_norm\": 0.6878612716763006,\n \"acc_norm_stderr\": 0.03533133389323657\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.4411764705882353,\n \"acc_stderr\": 0.049406356306056595,\n \"acc_norm\": 0.4411764705882353,\n \"acc_norm_stderr\": 0.049406356306056595\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.76,\n \"acc_stderr\": 0.04292346959909283,\n \"acc_norm\": 0.76,\n \"acc_norm_stderr\": 0.04292346959909283\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.574468085106383,\n \"acc_stderr\": 0.03232146916224468,\n \"acc_norm\": 0.574468085106383,\n \"acc_norm_stderr\": 0.03232146916224468\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.49122807017543857,\n \"acc_stderr\": 0.04702880432049615,\n \"acc_norm\": 0.49122807017543857,\n \"acc_norm_stderr\": 0.04702880432049615\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5517241379310345,\n \"acc_stderr\": 0.04144311810878152,\n \"acc_norm\": 0.5517241379310345,\n \"acc_norm_stderr\": 0.04144311810878152\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.42592592592592593,\n \"acc_stderr\": 0.02546714904546955,\n \"acc_norm\": 0.42592592592592593,\n \"acc_norm_stderr\": 0.02546714904546955\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4603174603174603,\n \"acc_stderr\": 
0.04458029125470973,\n \"acc_norm\": 0.4603174603174603,\n \"acc_norm_stderr\": 0.04458029125470973\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7806451612903226,\n \"acc_stderr\": 0.0235407993587233,\n \"acc_norm\": 0.7806451612903226,\n \"acc_norm_stderr\": 0.0235407993587233\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.4827586206896552,\n \"acc_stderr\": 0.035158955511657,\n \"acc_norm\": 0.4827586206896552,\n \"acc_norm_stderr\": 0.035158955511657\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7636363636363637,\n \"acc_stderr\": 0.03317505930009181,\n \"acc_norm\": 0.7636363636363637,\n \"acc_norm_stderr\": 0.03317505930009181\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.803030303030303,\n \"acc_stderr\": 0.028335609732463362,\n \"acc_norm\": 0.803030303030303,\n \"acc_norm_stderr\": 0.028335609732463362\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8860103626943006,\n \"acc_stderr\": 0.022935144053919443,\n \"acc_norm\": 0.8860103626943006,\n \"acc_norm_stderr\": 0.022935144053919443\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6717948717948717,\n \"acc_stderr\": 0.023807633198657266,\n \"acc_norm\": 0.6717948717948717,\n \"acc_norm_stderr\": 0.023807633198657266\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.34444444444444444,\n \"acc_stderr\": 0.02897264888484427,\n \"acc_norm\": 0.34444444444444444,\n \"acc_norm_stderr\": 0.02897264888484427\n },\n 
\"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6680672268907563,\n \"acc_stderr\": 0.03058869701378364,\n \"acc_norm\": 0.6680672268907563,\n \"acc_norm_stderr\": 0.03058869701378364\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3576158940397351,\n \"acc_stderr\": 0.03913453431177258,\n \"acc_norm\": 0.3576158940397351,\n \"acc_norm_stderr\": 0.03913453431177258\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8458715596330275,\n \"acc_stderr\": 0.015480826865374303,\n \"acc_norm\": 0.8458715596330275,\n \"acc_norm_stderr\": 0.015480826865374303\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5092592592592593,\n \"acc_stderr\": 0.034093869469927006,\n \"acc_norm\": 0.5092592592592593,\n \"acc_norm_stderr\": 0.034093869469927006\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8284313725490197,\n \"acc_stderr\": 0.026460569561240644,\n \"acc_norm\": 0.8284313725490197,\n \"acc_norm_stderr\": 0.026460569561240644\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.810126582278481,\n \"acc_stderr\": 0.02553010046023349,\n \"acc_norm\": 0.810126582278481,\n \"acc_norm_stderr\": 0.02553010046023349\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6816143497757847,\n \"acc_stderr\": 0.03126580522513713,\n \"acc_norm\": 0.6816143497757847,\n \"acc_norm_stderr\": 0.03126580522513713\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.8091603053435115,\n \"acc_stderr\": 0.03446513350752598,\n \"acc_norm\": 0.8091603053435115,\n \"acc_norm_stderr\": 0.03446513350752598\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7768595041322314,\n \"acc_stderr\": 0.03800754475228733,\n \"acc_norm\": 0.7768595041322314,\n \"acc_norm_stderr\": 0.03800754475228733\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.75,\n \"acc_stderr\": 0.04186091791394607,\n \"acc_norm\": 0.75,\n 
\"acc_norm_stderr\": 0.04186091791394607\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7423312883435583,\n \"acc_stderr\": 0.03436150827846917,\n \"acc_norm\": 0.7423312883435583,\n \"acc_norm_stderr\": 0.03436150827846917\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4642857142857143,\n \"acc_stderr\": 0.04733667890053756,\n \"acc_norm\": 0.4642857142857143,\n \"acc_norm_stderr\": 0.04733667890053756\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7961165048543689,\n \"acc_stderr\": 0.039891398595317706,\n \"acc_norm\": 0.7961165048543689,\n \"acc_norm_stderr\": 0.039891398595317706\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8803418803418803,\n \"acc_stderr\": 0.021262719400406974,\n \"acc_norm\": 0.8803418803418803,\n \"acc_norm_stderr\": 0.021262719400406974\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.75,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8212005108556832,\n \"acc_stderr\": 0.013702643715368983,\n \"acc_norm\": 0.8212005108556832,\n \"acc_norm_stderr\": 0.013702643715368983\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7398843930635838,\n \"acc_stderr\": 0.023618678310069356,\n \"acc_norm\": 0.7398843930635838,\n \"acc_norm_stderr\": 0.023618678310069356\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.4044692737430168,\n \"acc_stderr\": 0.01641444091729315,\n \"acc_norm\": 0.4044692737430168,\n \"acc_norm_stderr\": 0.01641444091729315\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7156862745098039,\n \"acc_stderr\": 0.025829163272757482,\n \"acc_norm\": 0.7156862745098039,\n \"acc_norm_stderr\": 0.025829163272757482\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7202572347266881,\n \"acc_stderr\": 0.025494259350694912,\n \"acc_norm\": 0.7202572347266881,\n 
\"acc_norm_stderr\": 0.025494259350694912\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7407407407407407,\n \"acc_stderr\": 0.02438366553103545,\n \"acc_norm\": 0.7407407407407407,\n \"acc_norm_stderr\": 0.02438366553103545\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.48226950354609927,\n \"acc_stderr\": 0.02980873964223777,\n \"acc_norm\": 0.48226950354609927,\n \"acc_norm_stderr\": 0.02980873964223777\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.47131681877444587,\n \"acc_stderr\": 0.012749206007657476,\n \"acc_norm\": 0.47131681877444587,\n \"acc_norm_stderr\": 0.012749206007657476\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6691176470588235,\n \"acc_stderr\": 0.028582709753898445,\n \"acc_norm\": 0.6691176470588235,\n \"acc_norm_stderr\": 0.028582709753898445\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6830065359477124,\n \"acc_stderr\": 0.018824219512706207,\n \"acc_norm\": 0.6830065359477124,\n \"acc_norm_stderr\": 0.018824219512706207\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7,\n \"acc_stderr\": 0.04389311454644286,\n \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.04389311454644286\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.726530612244898,\n \"acc_stderr\": 0.028535560337128448,\n \"acc_norm\": 0.726530612244898,\n \"acc_norm_stderr\": 0.028535560337128448\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8407960199004975,\n \"acc_stderr\": 0.025870646766169136,\n \"acc_norm\": 0.8407960199004975,\n \"acc_norm_stderr\": 0.025870646766169136\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.87,\n \"acc_stderr\": 0.033799766898963086,\n \"acc_norm\": 0.87,\n \"acc_norm_stderr\": 0.033799766898963086\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5481927710843374,\n \"acc_stderr\": 0.03874371556587953,\n \"acc_norm\": 0.5481927710843374,\n 
\"acc_norm_stderr\": 0.03874371556587953\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8362573099415205,\n \"acc_stderr\": 0.028380919596145866,\n \"acc_norm\": 0.8362573099415205,\n \"acc_norm_stderr\": 0.028380919596145866\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5471236230110159,\n \"mc1_stderr\": 0.01742558984831402,\n \"mc2\": 0.7236632495570844,\n \"mc2_stderr\": 0.014431747881822006\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8326756116811366,\n \"acc_stderr\": 0.010490608806828075\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.7164518574677786,\n \"acc_stderr\": 0.012415070917508118\n }\n}\n```", "repo_url": "https://huggingface.co/FelixChao/WestSeverus-7B-DPO-v2", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|arc:challenge|25_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|gsm8k|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|hellaswag|10_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-24T13-24-18.891595.parquet", 
"**/details_harness|hendrycksTest-astronomy|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T13-24-18.891595.parquet", 
"**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T13-24-18.891595.parquet", 
"**/details_harness|hendrycksTest-nutrition|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T13-24-18.891595.parquet", 
"**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T13-24-18.891595.parquet", 
"**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-24T13-24-18.891595.parquet", 
"**/details_harness|hendrycksTest-security_studies|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-24T13-24-18.891595.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": 
["**/details_harness|hendrycksTest-college_physics|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", 
"data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-24T13-24-18.891595.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", 
"path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["**/details_harness|winogrande|5_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-24T13-24-18.891595.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_01_24T13_24_18.891595", "path": ["results_2024-01-24T13-24-18.891595.parquet"]}, {"split": "latest", "path": ["results_2024-01-24T13-24-18.891595.parquet"]}]}]}
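The config listing above pairs each harness task (e.g. `details_harness|truthfulqa:mc|0`) with a config name (e.g. `harness_truthfulqa_mc_0`). A small sketch of the naming rule implied by that listing — the helper function name is ours, for illustration only:

```python
def config_name(task: str, n_shot: int) -> str:
    # Config names in the metadata above follow "harness_<task>_<shots>",
    # with "-" and ":" in the task name normalized to "_".
    return "harness_" + task.replace("-", "_").replace(":", "_") + f"_{n_shot}"

config_name("hendrycksTest-world_religions", 5)  # harness_hendrycksTest_world_religions_5
config_name("truthfulqa:mc", 0)                  # harness_truthfulqa_mc_0
config_name("winogrande", 5)                     # harness_winogrande_5
```

The resulting string is what you would pass as the second argument to `load_dataset` to pull a single task's details.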
2024-01-24T13:27:02+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of FelixChao/WestSeverus-7B-DPO-v2 Dataset automatically created during the evaluation run of model FelixChao/WestSeverus-7B-DPO-v2 on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2024-01-24T13:24:18.891595 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Evaluation run of FelixChao/WestSeverus-7B-DPO-v2\n\n\n\nDataset automatically created during the evaluation run of model FelixChao/WestSeverus-7B-DPO-v2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-01-24T13:24:18.891595(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of FelixChao/WestSeverus-7B-DPO-v2\n\n\n\nDataset automatically created during the evaluation run of model FelixChao/WestSeverus-7B-DPO-v2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-01-24T13:24:18.891595(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
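The FelixChao/WestSeverus-7B-DPO-v2 card above elides its load snippet. Assuming its details repository follows the same naming convention as the other cards in this dataset ("open-llm-leaderboard/details_<org>__<model>", with "/" replaced by "__"), a minimal sketch — the `details_repo` helper name is ours, for illustration only:

```python
def details_repo(model_id: str) -> str:
    # Derive the leaderboard details repository name from a model id,
    # following the convention visible in the other cards of this dataset.
    return "open-llm-leaderboard/details_" + model_id.replace("/", "__")

repo = details_repo("FelixChao/WestSeverus-7B-DPO-v2")
print(repo)  # open-llm-leaderboard/details_FelixChao__WestSeverus-7B-DPO-v2

# With that name, a split loads the same way as in the other cards:
# from datasets import load_dataset
# data = load_dataset(repo, "harness_winogrande_5", split="train")
```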
f0ddc28af938ede18f70278b8e25599d601de2b2
# Dataset Card for Evaluation run of gradientai/v-alpha-tross <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [gradientai/v-alpha-tross](https://huggingface.co/gradientai/v-alpha-tross) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_gradientai__v-alpha-tross", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2024-01-25T10:09:36.875162](https://huggingface.co/datasets/open-llm-leaderboard/details_gradientai__v-alpha-tross/blob/main/results_2024-01-25T10-09-36.875162.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.7035931528833435, "acc_stderr": 0.030352450067599814, "acc_norm": 0.7068411118887601, "acc_norm_stderr": 0.030946357887990564, "mc1": 0.4785801713586291, "mc1_stderr": 0.01748743214471181, "mc2": 0.6521377907591408, "mc2_stderr": 0.014862743520821054 }, "harness|arc:challenge|25": { "acc": 0.6732081911262798, "acc_stderr": 0.013706665975587333, "acc_norm": 0.7192832764505119, "acc_norm_stderr": 0.01313123812697558 }, "harness|hellaswag|10": { "acc": 0.6760605457080263, "acc_stderr": 0.0046702081285792335, "acc_norm": 0.8681537542322246, "acc_norm_stderr": 0.0033763209559167064 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.36, "acc_stderr": 0.04824181513244218, "acc_norm": 0.36, "acc_norm_stderr": 0.04824181513244218 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.6444444444444445, "acc_stderr": 0.04135176749720385, "acc_norm": 0.6444444444444445, "acc_norm_stderr": 0.04135176749720385 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.8157894736842105, "acc_stderr": 0.0315469804508223, "acc_norm": 0.8157894736842105, "acc_norm_stderr": 0.0315469804508223 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.74, "acc_stderr": 0.04408440022768081, "acc_norm": 0.74, "acc_norm_stderr": 0.04408440022768081 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.7245283018867924, "acc_stderr": 0.027495663683724057, "acc_norm": 0.7245283018867924, "acc_norm_stderr": 0.027495663683724057 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.8472222222222222, "acc_stderr": 0.030085743248565663, "acc_norm": 0.8472222222222222, "acc_norm_stderr": 0.030085743248565663 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.52, "acc_stderr": 0.050211673156867795, "acc_norm": 0.52, "acc_norm_stderr": 0.050211673156867795 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.56, "acc_stderr": 0.04988876515698589, "acc_norm": 0.56, 
"acc_norm_stderr": 0.04988876515698589 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.36, "acc_stderr": 0.04824181513244218, "acc_norm": 0.36, "acc_norm_stderr": 0.04824181513244218 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6936416184971098, "acc_stderr": 0.035149425512674394, "acc_norm": 0.6936416184971098, "acc_norm_stderr": 0.035149425512674394 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.4019607843137255, "acc_stderr": 0.04878608714466996, "acc_norm": 0.4019607843137255, "acc_norm_stderr": 0.04878608714466996 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.75, "acc_stderr": 0.04351941398892446, "acc_norm": 0.75, "acc_norm_stderr": 0.04351941398892446 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.6851063829787234, "acc_stderr": 0.030363582197238174, "acc_norm": 0.6851063829787234, "acc_norm_stderr": 0.030363582197238174 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.5, "acc_stderr": 0.047036043419179864, "acc_norm": 0.5, "acc_norm_stderr": 0.047036043419179864 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.6413793103448275, "acc_stderr": 0.039966295748767186, "acc_norm": 0.6413793103448275, "acc_norm_stderr": 0.039966295748767186 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.4523809523809524, "acc_stderr": 0.025634258115554955, "acc_norm": 0.4523809523809524, "acc_norm_stderr": 0.025634258115554955 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.47619047619047616, "acc_stderr": 0.04467062628403273, "acc_norm": 0.47619047619047616, "acc_norm_stderr": 0.04467062628403273 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.51, "acc_stderr": 0.05024183937956911, "acc_norm": 0.51, "acc_norm_stderr": 0.05024183937956911 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.8225806451612904, "acc_stderr": 0.021732540689329286, "acc_norm": 0.8225806451612904, "acc_norm_stderr": 0.021732540689329286 }, 
"harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.5123152709359606, "acc_stderr": 0.035169204442208966, "acc_norm": 0.5123152709359606, "acc_norm_stderr": 0.035169204442208966 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.79, "acc_stderr": 0.040936018074033256, "acc_norm": 0.79, "acc_norm_stderr": 0.040936018074033256 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.8242424242424242, "acc_stderr": 0.02972094300622445, "acc_norm": 0.8242424242424242, "acc_norm_stderr": 0.02972094300622445 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.8888888888888888, "acc_stderr": 0.022390787638216773, "acc_norm": 0.8888888888888888, "acc_norm_stderr": 0.022390787638216773 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.927461139896373, "acc_stderr": 0.018718998520678175, "acc_norm": 0.927461139896373, "acc_norm_stderr": 0.018718998520678175 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.7076923076923077, "acc_stderr": 0.02306043838085774, "acc_norm": 0.7076923076923077, "acc_norm_stderr": 0.02306043838085774 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.34074074074074073, "acc_stderr": 0.028897748741131143, "acc_norm": 0.34074074074074073, "acc_norm_stderr": 0.028897748741131143 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.7773109243697479, "acc_stderr": 0.02702543349888239, "acc_norm": 0.7773109243697479, "acc_norm_stderr": 0.02702543349888239 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.4900662251655629, "acc_stderr": 0.04081677107248436, "acc_norm": 0.4900662251655629, "acc_norm_stderr": 0.04081677107248436 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8880733944954129, "acc_stderr": 0.013517352714958792, "acc_norm": 0.8880733944954129, "acc_norm_stderr": 0.013517352714958792 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.6157407407407407, "acc_stderr": 
0.03317354514310742, "acc_norm": 0.6157407407407407, "acc_norm_stderr": 0.03317354514310742 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.9313725490196079, "acc_stderr": 0.017744453647073315, "acc_norm": 0.9313725490196079, "acc_norm_stderr": 0.017744453647073315 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.8776371308016878, "acc_stderr": 0.021331741829746793, "acc_norm": 0.8776371308016878, "acc_norm_stderr": 0.021331741829746793 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.7937219730941704, "acc_stderr": 0.02715715047956382, "acc_norm": 0.7937219730941704, "acc_norm_stderr": 0.02715715047956382 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.8854961832061069, "acc_stderr": 0.027927473753597446, "acc_norm": 0.8854961832061069, "acc_norm_stderr": 0.027927473753597446 }, "harness|hendrycksTest-international_law|5": { "acc": 0.8842975206611571, "acc_stderr": 0.02919980245562281, "acc_norm": 0.8842975206611571, "acc_norm_stderr": 0.02919980245562281 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.8148148148148148, "acc_stderr": 0.03755265865037181, "acc_norm": 0.8148148148148148, "acc_norm_stderr": 0.03755265865037181 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.8220858895705522, "acc_stderr": 0.03004735765580663, "acc_norm": 0.8220858895705522, "acc_norm_stderr": 0.03004735765580663 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.5446428571428571, "acc_stderr": 0.04726835553719098, "acc_norm": 0.5446428571428571, "acc_norm_stderr": 0.04726835553719098 }, "harness|hendrycksTest-management|5": { "acc": 0.8058252427184466, "acc_stderr": 0.03916667762822582, "acc_norm": 0.8058252427184466, "acc_norm_stderr": 0.03916667762822582 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8888888888888888, "acc_stderr": 0.020588491316092375, "acc_norm": 0.8888888888888888, "acc_norm_stderr": 0.020588491316092375 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.71, "acc_stderr": 
0.04560480215720684, "acc_norm": 0.71, "acc_norm_stderr": 0.04560480215720684 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.8697318007662835, "acc_stderr": 0.012036729568216052, "acc_norm": 0.8697318007662835, "acc_norm_stderr": 0.012036729568216052 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.7716763005780347, "acc_stderr": 0.022598703804321628, "acc_norm": 0.7716763005780347, "acc_norm_stderr": 0.022598703804321628 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.5698324022346368, "acc_stderr": 0.01655860163604103, "acc_norm": 0.5698324022346368, "acc_norm_stderr": 0.01655860163604103 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.7647058823529411, "acc_stderr": 0.024288619466046105, "acc_norm": 0.7647058823529411, "acc_norm_stderr": 0.024288619466046105 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.7684887459807074, "acc_stderr": 0.023956532766639133, "acc_norm": 0.7684887459807074, "acc_norm_stderr": 0.023956532766639133 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.8209876543209876, "acc_stderr": 0.02133086876212706, "acc_norm": 0.8209876543209876, "acc_norm_stderr": 0.02133086876212706 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.5425531914893617, "acc_stderr": 0.029719281272236834, "acc_norm": 0.5425531914893617, "acc_norm_stderr": 0.029719281272236834 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.5541069100391134, "acc_stderr": 0.012695244711379783, "acc_norm": 0.5541069100391134, "acc_norm_stderr": 0.012695244711379783 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.75, "acc_stderr": 0.026303648393696036, "acc_norm": 0.75, "acc_norm_stderr": 0.026303648393696036 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.7647058823529411, "acc_stderr": 0.017160587235046352, "acc_norm": 0.7647058823529411, "acc_norm_stderr": 0.017160587235046352 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.7363636363636363, "acc_stderr": 0.04220224692971987, 
"acc_norm": 0.7363636363636363, "acc_norm_stderr": 0.04220224692971987 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.8040816326530612, "acc_stderr": 0.025409301953225678, "acc_norm": 0.8040816326530612, "acc_norm_stderr": 0.025409301953225678 }, "harness|hendrycksTest-sociology|5": { "acc": 0.8805970149253731, "acc_stderr": 0.02292879327721974, "acc_norm": 0.8805970149253731, "acc_norm_stderr": 0.02292879327721974 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.89, "acc_stderr": 0.03144660377352203, "acc_norm": 0.89, "acc_norm_stderr": 0.03144660377352203 }, "harness|hendrycksTest-virology|5": { "acc": 0.5421686746987951, "acc_stderr": 0.0387862677100236, "acc_norm": 0.5421686746987951, "acc_norm_stderr": 0.0387862677100236 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.8596491228070176, "acc_stderr": 0.0266405825391332, "acc_norm": 0.8596491228070176, "acc_norm_stderr": 0.0266405825391332 }, "harness|truthfulqa:mc|0": { "mc1": 0.4785801713586291, "mc1_stderr": 0.01748743214471181, "mc2": 0.6521377907591408, "mc2_stderr": 0.014862743520821054 }, "harness|winogrande|5": { "acc": 0.8358326756116812, "acc_stderr": 0.0104108497752228 }, "harness|gsm8k|5": { "acc": 0.6178923426838514, "acc_stderr": 0.013384173935648494 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. 
--> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. 
--> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
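As noted above, each run is stored as a specific split named after the run's timestamp, with the `"latest"` split pointing at the most recent results. The split name is the run timestamp with `-` and `:` replaced by underscores (e.g. the latest run above, `2024-01-24T06:52:34.475524`, corresponds to the split `2024_01_24T06_52_34.475524`). A minimal sketch of that convention — the helper name is illustrative and not part of the dataset:

```python
def run_timestamp_to_split(timestamp: str) -> str:
    """Convert a run timestamp (as it appears in result filenames)
    into the corresponding dataset split name, which replaces the
    '-' and ':' separators with underscores."""
    return timestamp.replace("-", "_").replace(":", "_")


split = run_timestamp_to_split("2024-01-24T06:52:34.475524")
# split == "2024_01_24T06_52_34.475524"

# To load that specific run instead of the latest one
# (requires network access and the `datasets` library):
# from datasets import load_dataset
# data = load_dataset(
#     "open-llm-leaderboard/details_jsfs11__TurdusTrixBeagle-DARETIES-7B",
#     "harness_winogrande_5",
#     split=split,
# )
```

Passing `split="latest"` (or `split="train"`, as in the example near the top of this card) always returns the most recent run.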
open-llm-leaderboard/details_gradientai__v-alpha-tross
[ "region:us" ]
2024-01-24T13:44:34+00:00
{"pretty_name": "Evaluation run of gradientai/v-alpha-tross", "dataset_summary": "Dataset automatically created during the evaluation run of model [gradientai/v-alpha-tross](https://huggingface.co/gradientai/v-alpha-tross) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_gradientai__v-alpha-tross\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-25T10:09:36.875162](https://huggingface.co/datasets/open-llm-leaderboard/details_gradientai__v-alpha-tross/blob/main/results_2024-01-25T10-09-36.875162.json)(note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.7035931528833435,\n \"acc_stderr\": 0.030352450067599814,\n \"acc_norm\": 0.7068411118887601,\n \"acc_norm_stderr\": 0.030946357887990564,\n \"mc1\": 0.4785801713586291,\n \"mc1_stderr\": 0.01748743214471181,\n \"mc2\": 0.6521377907591408,\n \"mc2_stderr\": 0.014862743520821054\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6732081911262798,\n \"acc_stderr\": 0.013706665975587333,\n \"acc_norm\": 0.7192832764505119,\n \"acc_norm_stderr\": 0.01313123812697558\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6760605457080263,\n \"acc_stderr\": 0.0046702081285792335,\n \"acc_norm\": 0.8681537542322246,\n \"acc_norm_stderr\": 0.0033763209559167064\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.36,\n \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6444444444444445,\n \"acc_stderr\": 0.04135176749720385,\n \"acc_norm\": 0.6444444444444445,\n \"acc_norm_stderr\": 0.04135176749720385\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.8157894736842105,\n \"acc_stderr\": 0.0315469804508223,\n \"acc_norm\": 0.8157894736842105,\n \"acc_norm_stderr\": 0.0315469804508223\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.74,\n \"acc_stderr\": 0.04408440022768081,\n \"acc_norm\": 0.74,\n \"acc_norm_stderr\": 0.04408440022768081\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.7245283018867924,\n \"acc_stderr\": 0.027495663683724057,\n \"acc_norm\": 0.7245283018867924,\n \"acc_norm_stderr\": 0.027495663683724057\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.8472222222222222,\n \"acc_stderr\": 0.030085743248565663,\n \"acc_norm\": 0.8472222222222222,\n \"acc_norm_stderr\": 0.030085743248565663\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.52,\n 
\"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.52,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.56,\n \"acc_stderr\": 0.04988876515698589,\n \"acc_norm\": 0.56,\n \"acc_norm_stderr\": 0.04988876515698589\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.36,\n \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6936416184971098,\n \"acc_stderr\": 0.035149425512674394,\n \"acc_norm\": 0.6936416184971098,\n \"acc_norm_stderr\": 0.035149425512674394\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.4019607843137255,\n \"acc_stderr\": 0.04878608714466996,\n \"acc_norm\": 0.4019607843137255,\n \"acc_norm_stderr\": 0.04878608714466996\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.75,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.6851063829787234,\n \"acc_stderr\": 0.030363582197238174,\n \"acc_norm\": 0.6851063829787234,\n \"acc_norm_stderr\": 0.030363582197238174\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.047036043419179864,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.047036043419179864\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.6413793103448275,\n \"acc_stderr\": 0.039966295748767186,\n \"acc_norm\": 0.6413793103448275,\n \"acc_norm_stderr\": 0.039966295748767186\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.4523809523809524,\n \"acc_stderr\": 0.025634258115554955,\n \"acc_norm\": 0.4523809523809524,\n \"acc_norm_stderr\": 0.025634258115554955\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.47619047619047616,\n \"acc_stderr\": 0.04467062628403273,\n \"acc_norm\": 
0.47619047619047616,\n \"acc_norm_stderr\": 0.04467062628403273\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.51,\n \"acc_stderr\": 0.05024183937956911,\n \"acc_norm\": 0.51,\n \"acc_norm_stderr\": 0.05024183937956911\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.8225806451612904,\n \"acc_stderr\": 0.021732540689329286,\n \"acc_norm\": 0.8225806451612904,\n \"acc_norm_stderr\": 0.021732540689329286\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5123152709359606,\n \"acc_stderr\": 0.035169204442208966,\n \"acc_norm\": 0.5123152709359606,\n \"acc_norm_stderr\": 0.035169204442208966\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.79,\n \"acc_stderr\": 0.040936018074033256,\n \"acc_norm\": 0.79,\n \"acc_norm_stderr\": 0.040936018074033256\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.8242424242424242,\n \"acc_stderr\": 0.02972094300622445,\n \"acc_norm\": 0.8242424242424242,\n \"acc_norm_stderr\": 0.02972094300622445\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.8888888888888888,\n \"acc_stderr\": 0.022390787638216773,\n \"acc_norm\": 0.8888888888888888,\n \"acc_norm_stderr\": 0.022390787638216773\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.927461139896373,\n \"acc_stderr\": 0.018718998520678175,\n \"acc_norm\": 0.927461139896373,\n \"acc_norm_stderr\": 0.018718998520678175\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.7076923076923077,\n \"acc_stderr\": 0.02306043838085774,\n \"acc_norm\": 0.7076923076923077,\n \"acc_norm_stderr\": 0.02306043838085774\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.34074074074074073,\n \"acc_stderr\": 0.028897748741131143,\n \"acc_norm\": 0.34074074074074073,\n \"acc_norm_stderr\": 0.028897748741131143\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n 
\"acc\": 0.7773109243697479,\n \"acc_stderr\": 0.02702543349888239,\n \"acc_norm\": 0.7773109243697479,\n \"acc_norm_stderr\": 0.02702543349888239\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.4900662251655629,\n \"acc_stderr\": 0.04081677107248436,\n \"acc_norm\": 0.4900662251655629,\n \"acc_norm_stderr\": 0.04081677107248436\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8880733944954129,\n \"acc_stderr\": 0.013517352714958792,\n \"acc_norm\": 0.8880733944954129,\n \"acc_norm_stderr\": 0.013517352714958792\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.6157407407407407,\n \"acc_stderr\": 0.03317354514310742,\n \"acc_norm\": 0.6157407407407407,\n \"acc_norm_stderr\": 0.03317354514310742\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.9313725490196079,\n \"acc_stderr\": 0.017744453647073315,\n \"acc_norm\": 0.9313725490196079,\n \"acc_norm_stderr\": 0.017744453647073315\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.8776371308016878,\n \"acc_stderr\": 0.021331741829746793,\n \"acc_norm\": 0.8776371308016878,\n \"acc_norm_stderr\": 0.021331741829746793\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7937219730941704,\n \"acc_stderr\": 0.02715715047956382,\n \"acc_norm\": 0.7937219730941704,\n \"acc_norm_stderr\": 0.02715715047956382\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.8854961832061069,\n \"acc_stderr\": 0.027927473753597446,\n \"acc_norm\": 0.8854961832061069,\n \"acc_norm_stderr\": 0.027927473753597446\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8842975206611571,\n \"acc_stderr\": 0.02919980245562281,\n \"acc_norm\": 0.8842975206611571,\n \"acc_norm_stderr\": 0.02919980245562281\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8148148148148148,\n \"acc_stderr\": 0.03755265865037181,\n \"acc_norm\": 0.8148148148148148,\n \"acc_norm_stderr\": 
0.03755265865037181\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.8220858895705522,\n \"acc_stderr\": 0.03004735765580663,\n \"acc_norm\": 0.8220858895705522,\n \"acc_norm_stderr\": 0.03004735765580663\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5446428571428571,\n \"acc_stderr\": 0.04726835553719098,\n \"acc_norm\": 0.5446428571428571,\n \"acc_norm_stderr\": 0.04726835553719098\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8058252427184466,\n \"acc_stderr\": 0.03916667762822582,\n \"acc_norm\": 0.8058252427184466,\n \"acc_norm_stderr\": 0.03916667762822582\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8888888888888888,\n \"acc_stderr\": 0.020588491316092375,\n \"acc_norm\": 0.8888888888888888,\n \"acc_norm_stderr\": 0.020588491316092375\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.71,\n \"acc_stderr\": 0.04560480215720684,\n \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.04560480215720684\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8697318007662835,\n \"acc_stderr\": 0.012036729568216052,\n \"acc_norm\": 0.8697318007662835,\n \"acc_norm_stderr\": 0.012036729568216052\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7716763005780347,\n \"acc_stderr\": 0.022598703804321628,\n \"acc_norm\": 0.7716763005780347,\n \"acc_norm_stderr\": 0.022598703804321628\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.5698324022346368,\n \"acc_stderr\": 0.01655860163604103,\n \"acc_norm\": 0.5698324022346368,\n \"acc_norm_stderr\": 0.01655860163604103\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7647058823529411,\n \"acc_stderr\": 0.024288619466046105,\n \"acc_norm\": 0.7647058823529411,\n \"acc_norm_stderr\": 0.024288619466046105\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7684887459807074,\n \"acc_stderr\": 0.023956532766639133,\n \"acc_norm\": 0.7684887459807074,\n \"acc_norm_stderr\": 
0.023956532766639133\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.8209876543209876,\n \"acc_stderr\": 0.02133086876212706,\n \"acc_norm\": 0.8209876543209876,\n \"acc_norm_stderr\": 0.02133086876212706\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.5425531914893617,\n \"acc_stderr\": 0.029719281272236834,\n \"acc_norm\": 0.5425531914893617,\n \"acc_norm_stderr\": 0.029719281272236834\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.5541069100391134,\n \"acc_stderr\": 0.012695244711379783,\n \"acc_norm\": 0.5541069100391134,\n \"acc_norm_stderr\": 0.012695244711379783\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.75,\n \"acc_stderr\": 0.026303648393696036,\n \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.026303648393696036\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.7647058823529411,\n \"acc_stderr\": 0.017160587235046352,\n \"acc_norm\": 0.7647058823529411,\n \"acc_norm_stderr\": 0.017160587235046352\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7363636363636363,\n \"acc_stderr\": 0.04220224692971987,\n \"acc_norm\": 0.7363636363636363,\n \"acc_norm_stderr\": 0.04220224692971987\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.8040816326530612,\n \"acc_stderr\": 0.025409301953225678,\n \"acc_norm\": 0.8040816326530612,\n \"acc_norm_stderr\": 0.025409301953225678\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8805970149253731,\n \"acc_stderr\": 0.02292879327721974,\n \"acc_norm\": 0.8805970149253731,\n \"acc_norm_stderr\": 0.02292879327721974\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.89,\n \"acc_stderr\": 0.03144660377352203,\n \"acc_norm\": 0.89,\n \"acc_norm_stderr\": 0.03144660377352203\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5421686746987951,\n \"acc_stderr\": 0.0387862677100236,\n \"acc_norm\": 0.5421686746987951,\n \"acc_norm_stderr\": 
0.0387862677100236\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8596491228070176,\n \"acc_stderr\": 0.0266405825391332,\n \"acc_norm\": 0.8596491228070176,\n \"acc_norm_stderr\": 0.0266405825391332\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4785801713586291,\n \"mc1_stderr\": 0.01748743214471181,\n \"mc2\": 0.6521377907591408,\n \"mc2_stderr\": 0.014862743520821054\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8358326756116812,\n \"acc_stderr\": 0.0104108497752228\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6178923426838514,\n \"acc_stderr\": 0.013384173935648494\n }\n}\n```", "repo_url": "https://huggingface.co/gradientai/v-alpha-tross", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|arc:challenge|25_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|arc:challenge|25_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|gsm8k|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|gsm8k|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|hellaswag|10_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|hellaswag|10_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hellaswag|10_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T13-42-12.602849.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-24T13-42-12.602849.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-24T13-42-12.602849.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T13-42-12.602849.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T13-42-12.602849.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-24T13-42-12.602849.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T13-42-12.602849.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T13-42-12.602849.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T13-42-12.602849.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T13-42-12.602849.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-24T13-42-12.602849.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-24T13-42-12.602849.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T13-42-12.602849.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-24T13-42-12.602849.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T13-42-12.602849.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T13-42-12.602849.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T13-42-12.602849.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-24T13-42-12.602849.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T13-42-12.602849.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T13-42-12.602849.parquet", 
"**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T13-42-12.602849.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T13-42-12.602849.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T13-42-12.602849.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T13-42-12.602849.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T13-42-12.602849.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T13-42-12.602849.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T13-42-12.602849.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T13-42-12.602849.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T13-42-12.602849.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T13-42-12.602849.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T13-42-12.602849.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T13-42-12.602849.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-24T13-42-12.602849.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T13-42-12.602849.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-24T13-42-12.602849.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T13-42-12.602849.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T13-42-12.602849.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T13-42-12.602849.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-24T13-42-12.602849.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-24T13-42-12.602849.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T13-42-12.602849.parquet", 
"**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T13-42-12.602849.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T13-42-12.602849.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T13-42-12.602849.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-24T13-42-12.602849.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-24T13-42-12.602849.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-24T13-42-12.602849.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T13-42-12.602849.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-24T13-42-12.602849.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T13-42-12.602849.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T13-42-12.602849.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-24T13-42-12.602849.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-24T13-42-12.602849.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-24T13-42-12.602849.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T13-42-12.602849.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-24T13-42-12.602849.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-25T10-09-36.875162.parquet", 
"**/details_harness|hendrycksTest-college_chemistry|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-25T10-09-36.875162.parquet", 
"**/details_harness|hendrycksTest-high_school_physics|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-25T10-09-36.875162.parquet", 
"**/details_harness|hendrycksTest-professional_medicine|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-25T10-09-36.875162.parquet", 
"**/details_harness|hendrycksTest-econometrics|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-25T10-09-36.875162.parquet", 
"**/details_harness|hendrycksTest-international_law|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-25T10-09-36.875162.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-25T10-09-36.875162.parquet", 
"**/details_harness|hendrycksTest-world_religions|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": 
[{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", 
"path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-25T10-09-36.875162.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": 
["**/details_harness|hendrycksTest-nutrition|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": 
"2024_01_25T10_09_36.875162", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", 
"path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-world_religions|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["**/details_harness|winogrande|5_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["**/details_harness|winogrande|5_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-25T10-09-36.875162.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_01_24T13_42_12.602849", "path": ["results_2024-01-24T13-42-12.602849.parquet"]}, {"split": "2024_01_25T10_09_36.875162", "path": ["results_2024-01-25T10-09-36.875162.parquet"]}, {"split": "latest", "path": ["results_2024-01-25T10-09-36.875162.parquet"]}]}]}
2024-01-25T10:12:38+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of gradientai/v-alpha-tross Dataset automatically created during the evaluation run of model gradientai/v-alpha-tross on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2024-01-25T10:09:36.875162 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
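The card above says "To load the details from a run, you can for instance do the following:", but the code snippet was stripped in this plain-text rendering. A minimal sketch, assuming only the repo-naming pattern and the config/split names visible elsewhere in this dump (the config JSON lists a "latest" split for each config):

```python
# from datasets import load_dataset  # uncomment to actually download

def details_repo(org: str, model: str) -> str:
    """Build the repo id following the Open LLM Leaderboard detail-dataset pattern."""
    return f"open-llm-leaderboard/details_{org}__{model}"

repo = details_repo("gradientai", "v-alpha-tross")
# Network call, so left commented here:
# data = load_dataset(repo, "harness_winogrande_5", split="latest")
print(repo)  # -> open-llm-leaderboard/details_gradientai__v-alpha-tross
```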
[ "# Dataset Card for Evaluation run of gradientai/v-alpha-tross\n\n\n\nDataset automatically created during the evaluation run of model gradientai/v-alpha-tross on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-01-25T10:09:36.875162(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of gradientai/v-alpha-tross\n\n\n\nDataset automatically created during the evaluation run of model gradientai/v-alpha-tross on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-01-25T10:09:36.875162(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
b4949356ddd015e56fbc9e611846422299545712
# Dataset Card for TowerBlocks TowerBlocks is the dataset used to train [TowerInstruct-v0.1](https://huggingface.co/Unbabel/TowerInstruct-7B-v0.1), a language model specialized for translation tasks such as machine translation (e.g. general, document, terminology-aware or context-aware translation), automatic post edition, named-entity recognition, grammatical error correction, and paraphrase generation. - **Curated by:** Unbabel, Instituto Superior Técnico, CentraleSupélec, University of Paris-Saclay; - **Language(s) (NLP):** English, Portuguese, Spanish, French, German, Dutch, Italian, Korean, Chinese, Russian; - **License:** TowerBlocks contains data from many sources. We refer to the respective data sources below for information regarding licensing of the data. **Update from v0.1:** The only change from TowerBlocks-v0.1 to TowerBlocks-v0.2 is the preprocessing of the document-level translation. Models trained on TowerBlocks-v0.2 will be able to handle translation of different paragraphs (separated by a new-line separator) better than models trained on the v0.1 version. 
## Dataset Details TowerBlocks is a conversational dataset for translation related tasks created from a diverse set of high quality data sources: | Data Source | Task(s) | | -------------- | ----------- | | [WMT14 to WMT21](https://www.statmt.org/wmt22/results.html) | General Translation | | [WMT22](https://github.com/microsoft/gpt-MT) | Few-shot General Translation w/ Quality Shots | | [NTREX](https://github.com/MicrosoftTranslator/NTREX) | General Translation | | [Flores Dev](https://github.com/facebookresearch/flores) | General Translation | | [FRMT](https://github.com/google-research/google-research/tree/master/frmt) | General Translation | | [QT21](https://lindat.mff.cuni.cz/repository/xmlui/handle/11372/LRT-2390) | General Translation, Automatic Post Edition | | [ApeQuest](https://apequest.wordpress.com/) | General Translation, Automatic Post Edition | | [OPUS (Quality Filtered)](https://opus.nlpl.eu/) | General Translation | | [MT-GenEval](https://github.com/amazon-science/machine-translation-gender-eval) | General Translation, Context-Aware Translation | | [WMT20 to WMT22 Metrics MQM](https://www.statmt.org/wmt22/results.html) | Machine Translation Evaluation | | [WMT17 to WMT22 Metrics Direct Assessments](https://www.statmt.org/wmt22/results.html) | Machine Translation Evaluation | | [WMT21 Terminology Dev (filtered)](https://www.statmt.org/wmt21/terminology-task.html) | Terminology-aware Translation | | [Tatoeba Dev (filtered)](https://github.com/Helsinki-NLP/Tatoeba-Challenge) | Multi-reference Translation | | [MultiCoNER 2022 and 2023 Dev](https://registry.opendata.aws/multiconer/) | Named-entity Recognition | | [PAWS-X Dev](https://github.com/google-research-datasets/paws) | Paraphrase Generation | | [UltraChat 200k (filtered)](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) | Synthetic Chat data | | [Glaive Code Assistant (filtered)](https://huggingface.co/datasets/glaiveai/glaive-code-assistant) | Code instructions | 
The dataset was built by generating user instructions with records from each data source using a set of zero- and few-shot templates (with the exception of UltraChat 200k and Glaive Code Assistant, which already contain user instructions). ### Dataset features * `conversations` - The user and assistant dialog turns; * `dataset` - Original dataset for the record; * `lang` - Either the language or language pair of the original dataset; * `task` - Task for the record (can be used to identify the training templates for each task); * `split` - Split of the original dataset from which the record was taken. ## Intended uses and limitations TowerBlocks is intended for specializing language models towards translation-related tasks via supervised finetuning. ## Citation To be completed.
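As a rough illustration of the feature list above, here is a hypothetical record in that shape and a helper for selecting one task's records; the example texts and the task label are invented, not taken from the actual dataset:

```python
def records_for_task(records, task):
    """Select the records belonging to one training task."""
    return [r for r in records if r["task"] == task]

# Hypothetical TowerBlocks-shaped record (field names from the card's feature list;
# the values below are invented examples).
sample = {
    "conversations": [
        {"from": "user", "value": "Translate to German: Good morning."},
        {"from": "assistant", "value": "Guten Morgen."},
    ],
    "dataset": "Flores Dev",
    "lang": "en-de",
    "task": "general_translation",
    "split": "dev",
}

mt_records = records_for_task([sample], "general_translation")
```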
Unbabel/TowerBlocks-v0.2
[ "task_categories:conversational", "size_categories:100K<n<1M", "language:en", "language:de", "language:fr", "language:zh", "language:pt", "language:nl", "language:ru", "language:ko", "language:it", "language:es", "region:us" ]
2024-01-24T13:59:56+00:00
{"language": ["en", "de", "fr", "zh", "pt", "nl", "ru", "ko", "it", "es"], "size_categories": ["100K<n<1M"], "task_categories": ["conversational"], "dataset_info": {"features": [{"name": "conversations", "list": [{"name": "from", "dtype": "string"}, {"name": "value", "dtype": "string"}]}, {"name": "lang", "dtype": "string"}, {"name": "split", "dtype": "string"}, {"name": "dataset", "dtype": "string"}, {"name": "task", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1569630906, "num_examples": 637563}], "download_size": 730923832, "dataset_size": 1569630906}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-02-11T12:08:35+00:00
[]
[ "en", "de", "fr", "zh", "pt", "nl", "ru", "ko", "it", "es" ]
TAGS #task_categories-conversational #size_categories-100K<n<1M #language-English #language-German #language-French #language-Chinese #language-Portuguese #language-Dutch #language-Russian #language-Korean #language-Italian #language-Spanish #region-us
Dataset Card for TowerBlocks ============================ TowerBlocks is the dataset used to train TowerInstruct-v0.1, a language model specialized for translation tasks such as machine translation (e.g. general, document, terminology-aware or context-aware translation), automatic post edition, named-entity recognition, grammatical error correction, and paraphrase generation. * Curated by: Unbabel, Instituto Superior Técnico, CentraleSupélec, University of Paris-Saclay; * Language(s) (NLP): English, Portuguese, Spanish, French, German, Dutch, Italian, Korean, Chinese, Russian; * License: TowerBlocks contains data from many sources. We refer to the respective data sources below for information regarding licensing of the data. Update from v0.1: The only change from TowerBlocks-v0.1 to TowerBlocks-v0.2 is the preprocessing of the document-level translation. Models trained on TowerBlocks-v0.2 will be able to handle translation of different paragraphs (separated by a new-line separator) better than models trained on the v0.1 version. Dataset Details --------------- TowerBlocks is a conversational dataset for translation-related tasks created from a diverse set of high-quality data sources: The dataset was built by generating user instructions with records from each data source using a set of zero- and few-shot templates (with the exception of UltraChat 200k and Glaive Code Assistant, which already contain user instructions). ### Dataset features * 'conversations' - The user and assistant dialog turns; * 'dataset' - Original dataset for the record; * 'lang' - Either the language or language pair of the original dataset; * 'task' - Task for the record (can be used to identify the training templates for each task); * 'split' - Split of the original dataset from which the record was taken. Intended uses and limitations ----------------------------- TowerBlocks is intended for specializing language models towards translation-related tasks via supervised finetuning. 
To be completed.
[ "### Dataset features\n\n\n* 'conversations' - The user and assistant dialog turns;\n* 'dataset' - Original dataset for the record;\n* 'lang' - Either the language or language pair of the original dataset;\n* 'task' - Task for the record (Can be used to identify the training templates for each task);\n* 'split' - Split of the original dataset from which the record was taken.\n\n\nIntended uses and limitations\n-----------------------------\n\n\nTowerBlocks is intended for specializing language models towards translation related tasks via supervised finetuning.\n\n\nTo be completed." ]
[ "TAGS\n#task_categories-conversational #size_categories-100K<n<1M #language-English #language-German #language-French #language-Chinese #language-Portuguese #language-Dutch #language-Russian #language-Korean #language-Italian #language-Spanish #region-us \n", "### Dataset features\n\n\n* 'conversations' - The user and assistant dialog turns;\n* 'dataset' - Original dataset for the record;\n* 'lang' - Either the language or language pair of the original dataset;\n* 'task' - Task for the record (Can be used to identify the training templates for each task);\n* 'split' - Split of the original dataset from which the record was taken.\n\n\nIntended uses and limitations\n-----------------------------\n\n\nTowerBlocks is intended for specializing language models towards translation related tasks via supervised finetuning.\n\n\nTo be completed." ]
359e44c0e5d656ad4fe91bc9501f0dbfbeed12e5
# Logikon Bench Collection of high-quality datasets to evaluate LLMs' reasoning abilities. Compared to the original versions, the datasets have been checked for consistency; buggy examples have been removed. In addition, the English logiqa dataset is an entirely new translation of the original Chinese dataset. The subdatasets are made available in accordance with the original licenses: * LSAT: MIT License Link: https://github.com/zhongwanjun/AR-LSAT * LogiQA: CC BY-NC-SA 4.0 Link: https://github.com/lgw863/LogiQA-dataset * LogiQA 2.0: CC BY-NC-SA 4.0 Link: https://github.com/csitfun/LogiQA2.0
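Each config above is a multiple-choice test split stored as JSONL. As a sketch of how such a benchmark is typically scored, here is an accuracy helper over toy items; the field names and the items themselves are assumptions, not the benchmark's exact schema:

```python
def accuracy(predictions, records):
    """Fraction of items where the predicted option index matches the gold answer."""
    correct = sum(1 for pred, rec in zip(predictions, records) if pred == rec["answer"])
    return correct / len(records)

# Invented toy items in the shape of multiple-choice QA records.
records = [
    {"question": "2 + 2 = ?", "options": ["3", "4", "5"], "answer": 1},
    {"question": "Which is a prime number?", "options": ["4", "6", "7"], "answer": 2},
]

print(accuracy([1, 2], records))  # both correct -> 1.0
```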
logikon/logikon-bench
[ "task_categories:question-answering", "size_categories:1K<n<10K", "language:en", "license:other", "region:us" ]
2024-01-24T14:22:39+00:00
{"language": ["en"], "license": "other", "size_categories": ["1K<n<10K"], "task_categories": ["question-answering"], "configs": [{"config_name": "logiqa", "data_files": [{"split": "test", "path": "data/AGIEval/logiqa-en.jsonl"}]}, {"config_name": "lsat-ar", "data_files": [{"split": "test", "path": "data/AGIEval/lsat-ar.jsonl"}]}, {"config_name": "lsat-lr", "data_files": [{"split": "test", "path": "data/AGIEval/lsat-lr.jsonl"}]}, {"config_name": "lsat-rc", "data_files": [{"split": "test", "path": "data/AGIEval/lsat-rc.jsonl"}]}, {"config_name": "logiqa2", "data_files": [{"split": "test", "path": "data/LogiQA20/logiqa_20_en.jsonl"}]}]}
2024-01-24T14:47:19+00:00
[]
[ "en" ]
TAGS #task_categories-question-answering #size_categories-1K<n<10K #language-English #license-other #region-us
# Logikon Bench Collection of high-quality datasets to evaluate LLMs' reasoning abilities. Compared to the original versions, the datasets have been checked for consistency; buggy examples have been removed. In addition, the English logiqa dataset is an entirely new translation of the original Chinese dataset. The subdatasets are made available in accordance with the original licenses: * LSAT: MIT License Link: URL * LogiQA: CC BY-NC-SA 4.0 Link: URL * LogiQA 2.0: CC BY-NC-SA 4.0 Link: URL
[ "# Logikon Bench\n\nCollection of high-quality datasets to evaluate LLMs' reasoning abilities.\n\nCompared to the original versions, the datasets have been checked for consistency; buggy examples have been removed.\nIn addition, the English logiqa dataset is an entirely new translation of the original Chinese dataset.\n\nThe subdatasets are made available in accordance with the original licenses:\n\n* LSAT: MIT License\n Link: URL\n* LogiQA: CC BY-NC-SA 4.0\n Link: URL\n* LogiQA 2.0: CC BY-NC-SA 4.0\n Link: URL" ]
[ "TAGS\n#task_categories-question-answering #size_categories-1K<n<10K #language-English #license-other #region-us \n", "# Logikon Bench\n\nCollection of high-quality datasets to evaluate LLMs' reasoning abilities.\n\nCompared to the original versions, the datasets have been checked for consistency; buggy examples have been removed.\nIn addition, the English logiqa dataset is an entirely new translation of the original Chinese dataset.\n\nThe subdatasets are made available in accordance with the original licenses:\n\n* LSAT: MIT License\n Link: URL\n* LogiQA: CC BY-NC-SA 4.0\n Link: URL\n* LogiQA 2.0: CC BY-NC-SA 4.0\n Link: URL" ]
458f86122c8d9cf9e2c008553c5e5caa053de40f
# Dataset Card for Dataset Name This dataset aims to teach LLMs relative positioning (e.g. above, left from, below, etc.), which, in my findings, most LLMs, even SOTA ones, were not able to produce under all circumstances. Will be pushing a fine-tuned mixtral-7x8B with this dataset. ## Dataset Details ### Dataset Description Contains data for relative positioning on a grid (256, 256). Assumes the origin [0, 0] is in the bottom left. Two objects (Object 1, Object 2) are randomly created. The answer is their relative position to one another. - **Curated by:** [Antoine Angert] - **Language(s) (NLP):** [English] - **License:** [apache-2.0] ## Uses ### Direct Use Can be used to fine-tune language models. (Although not tested so far; will update.) ## Dataset Structure Features: Prompt (String), Response (String) ## Dataset Creation ### Curation Rationale I did some testing to see how well LLMs are able to handle positional data (2D, 3D). I found that most small models (tested: llama-7B, llama-13B, mistral-7B) have very poor positional understanding. Most bigger models (tested: gpt-3.5-turbo, gpt-4, llama-70B, mixtral-7x8B) have fairly good positional understanding, as long as no other context is provided. When I tried using positional reasoning with some other unrelated context, the performance of these bigger models dropped immensely. This is my first attempt at trying to embed this understanding directly into the models and not through context. #### Data Collection and Processing The dataset was generated using a Python script. ## Dataset Card Authors [optional] Antoine Angert ## Dataset Card Contact Contact under: [email protected]
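The card says the data was generated with a Python script over a (256, 256) grid with the origin at the bottom left. A minimal sketch of such a generator; the function names, prompt wording, and answer phrasing are my assumptions, not the dataset's exact output:

```python
import random

GRID = 256  # grid size from the card; origin [0, 0] is at the bottom left

def relative_position(obj1, obj2):
    """Describe where obj1 sits relative to obj2.

    With the origin at the bottom left, a larger y means "above" and a
    larger x means "right of". The exact wording is a guess.
    """
    (x1, y1), (x2, y2) = obj1, obj2
    parts = []
    if y1 > y2:
        parts.append("above")
    elif y1 < y2:
        parts.append("below")
    if x1 > x2:
        parts.append("right of")
    elif x1 < x2:
        parts.append("left of")
    return " and ".join(parts) if parts else "at the same position as"

def make_example(rng=random):
    """Generate one prompt/response pair for two random objects."""
    obj1 = (rng.randrange(GRID), rng.randrange(GRID))
    obj2 = (rng.randrange(GRID), rng.randrange(GRID))
    prompt = (f"Object 1 is at {obj1} and Object 2 is at {obj2}. "
              "Where is Object 1 relative to Object 2?")
    return {"prompt": prompt, "response": relative_position(obj1, obj2)}
```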
LlameUser/relative-positioning
[ "task_categories:feature-extraction", "size_categories:10K<n<100K", "language:en", "license:apache-2.0", "code", "region:us" ]
2024-01-24T14:49:54+00:00
{"language": ["en"], "license": "apache-2.0", "size_categories": ["10K<n<100K"], "task_categories": ["feature-extraction"], "pretty_name": "relative-positioning", "dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 916486, "num_examples": 10000}], "download_size": 164700, "dataset_size": 916486}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "tags": ["code"]}
2024-01-25T09:31:20+00:00
[]
[ "en" ]
TAGS #task_categories-feature-extraction #size_categories-10K<n<100K #language-English #license-apache-2.0 #code #region-us
# Dataset Card for Dataset Name This dataset aims to teach LLMs relative positioning (e.g. above, left from, below, etc.), which, in my findings, most LLMs, even SOTA ones, were not able to produce under all circumstances. Will be pushing a fine-tuned mixtral-7x8B with this dataset. ## Dataset Details ### Dataset Description Contains data for relative positioning on a grid (256, 256). Assumes the origin [0, 0] is in the bottom left. Two objects (Object 1, Object 2) are randomly created. The answer is their relative position to one another. - Curated by: [Antoine Angert] - Language(s) (NLP): [English] - License: [apache-2.0] ## Uses ### Direct Use Can be used to fine-tune language models. (Although not tested so far; will update.) ## Dataset Structure Features: Prompt (String), Response (String) ## Dataset Creation ### Curation Rationale I did some testing to see how well LLMs are able to handle positional data (2D, 3D). I found that most small models (tested: llama-7B, llama-13B, mistral-7B) have very poor positional understanding. Most bigger models (tested: gpt-3.5-turbo, gpt-4, llama-70B, mixtral-7x8B) have fairly good positional understanding, as long as no other context is provided. When I tried using positional reasoning with some other unrelated context, the performance of these bigger models dropped immensely. This is my first attempt at trying to embed this understanding directly into the models and not through context. #### Data Collection and Processing The dataset was generated using a Python script. ## Dataset Card Authors [optional] Antoine Angert ## Dataset Card Contact Contact under: URL@URL
[ "# Dataset Card for Dataset Name\n\nThis dataset aims to teach LLMs relative positioning (e.g. above, left from, below, etc.), \nwhich in my findings most LLMs, even SOTA where not able to produce under all circumstances.\nWill be pushing a fine-tuned mixtral-7x8B with this dataset.", "## Dataset Details", "### Dataset Description\n\nContains Data for relative positioning on a grid(256, 256).\nAssumes Origin [0, 0] is in the bottom left.\nTwo Objects (Object 1, Object 2) are randomly created.\nAnswer is there relative position to one another.\n\n- Curated by: [Antoine Angert]\n- Language(s) (NLP): [English]\n- License: [apache-2.0]", "## Uses", "### Direct Use\n\nCan be used to fine-tune Language Models.\n(Althought so far not been tested, will update)", "## Dataset Structure\n\nFeatures:\nPrompt(String), Response(String)", "## Dataset Creation", "### Curation Rationale\n\nI did some testing to see how well LLMs are able to handle positional data(2D, 3D).\nI found that most small models (tested: llama-7B, llama-13B, mistral-7B) have very poor positional understanding.\nMost bigger Models (tested: gpt-3.5-turbo, gpt-4, llama-70B, mixtral-7x8B) have a fairly good positional understanding, as long as no other context is provided.\nWhen I tried using positional reasoning with some other unrelated context, the performance of these bigger models dropped imensly.\nThis is my first attempt of trying to embed this understanding directly into the models and not throught context.", "#### Data Collection and Processing\n\nThe dataset was generated using a python script.", "## Dataset Card Authors [optional]\n\nAntoine Angert", "## Dataset Card Contact\n\nContact under:\nURL@URL" ]
[ "TAGS\n#task_categories-feature-extraction #size_categories-10K<n<100K #language-English #license-apache-2.0 #code #region-us \n", "# Dataset Card for Dataset Name\n\nThis dataset aims to teach LLMs relative positioning (e.g. above, left from, below, etc.), \nwhich in my findings most LLMs, even SOTA where not able to produce under all circumstances.\nWill be pushing a fine-tuned mixtral-7x8B with this dataset.", "## Dataset Details", "### Dataset Description\n\nContains Data for relative positioning on a grid(256, 256).\nAssumes Origin [0, 0] is in the bottom left.\nTwo Objects (Object 1, Object 2) are randomly created.\nAnswer is there relative position to one another.\n\n- Curated by: [Antoine Angert]\n- Language(s) (NLP): [English]\n- License: [apache-2.0]", "## Uses", "### Direct Use\n\nCan be used to fine-tune Language Models.\n(Althought so far not been tested, will update)", "## Dataset Structure\n\nFeatures:\nPrompt(String), Response(String)", "## Dataset Creation", "### Curation Rationale\n\nI did some testing to see how well LLMs are able to handle positional data(2D, 3D).\nI found that most small models (tested: llama-7B, llama-13B, mistral-7B) have very poor positional understanding.\nMost bigger Models (tested: gpt-3.5-turbo, gpt-4, llama-70B, mixtral-7x8B) have a fairly good positional understanding, as long as no other context is provided.\nWhen I tried using positional reasoning with some other unrelated context, the performance of these bigger models dropped imensly.\nThis is my first attempt of trying to embed this understanding directly into the models and not throught context.", "#### Data Collection and Processing\n\nThe dataset was generated using a python script.", "## Dataset Card Authors [optional]\n\nAntoine Angert", "## Dataset Card Contact\n\nContact under:\nURL@URL" ]
c3b7453478d6e0c8df07b4fdce483548a3f972d0
A version of https://huggingface.co/datasets/dennlinger/klexikon which can be useful for Direct Preference Optimization of large language models generating sentences in simple German.
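Per the dataset metadata, each row carries the standard DPO preference triple of prompt, chosen, and rejected strings. A minimal sketch of that shape with invented placeholder texts (the real rows come from Klexikon articles):

```python
def is_valid_dpo_record(r):
    """Check that a row carries non-empty prompt/chosen/rejected strings."""
    return all(isinstance(r.get(k), str) and r[k] for k in ("prompt", "chosen", "rejected"))

# Invented placeholder row, not taken from the actual dataset.
record = {
    "prompt": "Erkläre in einfacher Sprache: Was ist ein Vulkan?",
    "chosen": "Ein Vulkan ist ein Berg, aus dem heißes Gestein kommen kann.",
    "rejected": "Ein Vulkan ist eine geologische Struktur vulkanischen Ursprungs.",
}
```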
udkai/klexikon_dpo
[ "language:de", "license:cc-by-sa-4.0", "simple-german", "dpo", "language simplification", "region:us" ]
2024-01-24T15:06:06+00:00
{"language": ["de"], "license": "cc-by-sa-4.0", "pretty_name": "Kinder Lexikon Direct Preference Optimization Dataset", "dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "chosen", "dtype": "string"}, {"name": "rejected", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7156517, "num_examples": 2893}], "download_size": 4334446, "dataset_size": 7156517}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "tags": ["simple-german", "dpo", "language simplification"]}
2024-01-24T16:21:31+00:00
[]
[ "de" ]
TAGS #language-German #license-cc-by-sa-4.0 #simple-german #dpo #language simplification #region-us
A version of URL which can be useful for Direct Preference Optimization of large language models generating sentences in simple German.
[]
[ "TAGS\n#language-German #license-cc-by-sa-4.0 #simple-german #dpo #language simplification #region-us \n" ]
91f417b39d4638adb200a0a08afe86cf507e2558
These are images from WikiArt compiled into a zip, in WebP format. Enjoy! Hope you have a nice day, fellow seeker <3
pipyp/wikiart-captioned-p1
[ "region:us" ]
2024-01-24T15:13:03+00:00
{}
2024-01-24T17:24:43+00:00
[]
[]
TAGS #region-us
These are images from WikiArt compiled into a zip, in WebP format. Enjoy! Hope you have a nice day, fellow seeker <3
[]
[ "TAGS\n#region-us \n" ]
b0fe226483827efe7fc525b5170d6220cfef2b7f
# Dataset Card for "self-reward-dev1706109551" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
vwxyzjn/self-reward-dev1706109551
[ "region:us" ]
2024-01-24T15:21:25+00:00
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "candidate_0", "dtype": "string"}, {"name": "score_0", "dtype": "float64"}, {"name": "candidate_1", "dtype": "string"}, {"name": "score_1", "dtype": "float64"}, {"name": "candidate_2", "dtype": "string"}, {"name": "score_2", "dtype": "float64"}, {"name": "candidate_3", "dtype": "string"}, {"name": "score_3", "dtype": "float64"}, {"name": "chosen", "dtype": "string"}, {"name": "chosen_score", "dtype": "float64"}, {"name": "chosen_idx", "dtype": "int64"}, {"name": "rejected", "dtype": "string"}, {"name": "rejected_score", "dtype": "float64"}, {"name": "rejected_idx", "dtype": "int64"}], "splits": [{"name": "train_sft_sft", "num_bytes": 52732, "num_examples": 4}, {"name": "test_sft_sft", "num_bytes": 36864, "num_examples": 4}, {"name": "train_gen_sft", "num_bytes": 56304, "num_examples": 4}, {"name": "test_gen_sft", "num_bytes": 46121, "num_examples": 4}], "download_size": 313107, "dataset_size": 192021}, "configs": [{"config_name": "default", "data_files": [{"split": "train_sft_sft", "path": "data/train_sft_sft-*"}, {"split": "test_sft_sft", "path": "data/test_sft_sft-*"}, {"split": "train_gen_sft", "path": "data/train_gen_sft-*"}, {"split": "test_gen_sft", "path": "data/test_gen_sft-*"}]}]}
2024-01-24T15:21:30+00:00
[]
[]
TAGS #region-us
# Dataset Card for "self-reward-dev1706109551" More Information needed
[ "# Dataset Card for \"self-reward-dev1706109551\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"self-reward-dev1706109551\"\n\nMore Information needed" ]
1d02f26b09beca565bc922392ac5d8a346199545
# Dataset Card for "Yann_LeCun_Tweets" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
gmongaras/Yann_LeCun_Tweets
[ "region:us" ]
2024-01-24T15:21:29+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "likes", "dtype": "string"}, {"name": "reply", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 102789, "num_examples": 406}], "download_size": 72647, "dataset_size": 102789}}
2024-01-24T15:26:30+00:00
[]
[]
TAGS #region-us
# Dataset Card for "Yann_LeCun_Tweets" More Information needed
[ "# Dataset Card for \"Yann_LeCun_Tweets\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"Yann_LeCun_Tweets\"\n\nMore Information needed" ]
c858b254c939853613f0e3b7b6ac390510d6ef9c
# MILQA Hungarian question-answer benchmark database MILQA is a Hungarian machine reading comprehension, specifically, question answering (QA) benchmark database. In English, the most basic resource for the task is the Stanford Question Answering Dataset (SQuAD). The database was largely built following the principles of SQuAD 2.0, and is therefore characterized by the following: - Excerpts from high-quality Wikipedia articles are used as context for the questions (free-to-use texts, free-to-use language database). - It contains factual (not opinion) questions. - Includes questions that are not answered in the text. - The (shortest possible) answer to the question (if any) is indicated in the original text. - When formulating the questions, we have paraphrased the original text, so that in most cases the answer cannot be found by lexical search. - The questions are formulated in such a way that they are not only meaningful in the context of the text, but also stand on their own (e.g., they do not contain pronouns). Compared to SQuAD, the following innovations have been introduced (thus the Hungarian question-answer database contains more difficult questions than the original; the specific types of questions detailed below are marked separately in the database): - There can be more than one short answer to a question in the context (list-type answers; this is natural for some questions (5-6% in the database), but SQuAD never has more than one answer). - In addition to the short answer, a long answer is also given, which includes all the circumstances relevant to answering the question (min. 1 sentence, often several sentences). - Includes yes/no questions (about 10%); here, in addition to the long answer, which includes all relevant circumstances, a yes/no answer is also given. - Unanswerable questions (about 30% of questions) are relevant questions related to the topic, not questions generated by substitution from answerable questions. 
- Includes questions that can be answered by counting or performing arithmetic operations (these are difficult for current models). - Some of the unanswerable questions are "tricky questions" where a large proportion of native speakers would read an answer from the text, often based on incorrect default assumptions. These cases have been marked separately, with the hypothetical answer given. The questions were created by 5 annotators under the supervision of the Language Technology Research Group at the Pázmány Péter Catholic University, using a web annotation environment also created during the project. The database currently contains more than 23500 questions, with 70.93% of the questions being answered in the text. For further details and some baseline models trained on MILQA, please refer to the publication below. Annotators worked according to the following guidelines: - Everyday questions should be asked. - It should not be trivial to find the answer. - Some questions may be rephrased - you can ask a question concerning the same passage or answer in two different ways. - Write half as many unanswerable questions as answerable ones. - You will have 12 options for answerable questions, but you don't have to use them all. - It is not necessary to give both short and long answers in every case (there may be no short answer). - If possible, use a short answer and make it as short as possible. - Only complete words can be selected as an answer. - There may be several short answers to a question (list). This is always the answer to questions such as who and when. In such cases, the short answer selector with the given number of short answers must be selected several times in succession and the answers marked in sequence. - If the answer appears more than once in the text, select the one that is in the context pertaining to the question. - Short and long answers, or answers to different questions, may overlap in the text. 
- About 10% should be Boolean (yes/no) questions. - For Boolean questions, select a text passage as the answer (short or long, it doesn't matter) and click on the answer to bring up further options where you can tick No or Yes - If the answer is not grammatically correct in the context of the question (e.g. a different case ending should be used for the predicate used in the question), then after selecting the answer, click on the answer and tick the box Different answer. Do the same if there is a spelling mistake in the original. - Why...? (cause, effect) questions should also be included - There are no word order restrictions. You do not necessarily have to start with a question word. - Whenever possible, rephrase questions so that they do not use the same words as in the text - as many grammatical twists, word order changes, word changes, synonyms as possible, while keeping the question natural - The question should be 'self-contained', i.e. it should not contain parts that can only be understood knowing the text, e.g. pronouns. - The questions do not need to be entered in the order of the location of the answers in the text. The order of the questions is irrelevant. - If it's a text about XY, you should put XY in each question, to make the question self-contained. But it is good to have some variation in the formulation of XY as far as possible. - For unanswerable questions, ask questions that come to mind when reading the text but are not addressed in the text. Ask a question that, at least for the whole of the passage, has no answer and does not follow from it. - The question can be complex or arithmetical: e.g., the answer must be calculated from two given pieces of data. In this case, check the Arithmetic checkbox. - With "why?" questions, you can often formulate a shorter or better answer to the question. You may want to write this in the Different answer box. 
- For a counting question (how many types...), after giving x short answers, write x in the other exact answer box and put Arithmetic in the box. - If one sentence contains some information that makes the next sentence meaningful, and the short answer to the question is in sentence 2, both sentences should be included in the long answer. - Long answers should always be at least complete clauses, but preferably complete sentences or multiple complete sentences: they should contain all information relevant to the question. - If a particular passage is very ungrammatical or sounds wrong, do NOT add questions to it, leave it out. - If there are factual errors or self-contradictions in the text, do not enter questions concerning those parts. ## Format The database is stored as JSON data files. Its format is based on the format of SQuAD 2.0. However, there are lists of long and short answers (values of the keys "short" and "long"), each answer may have a "modanswer", and a special "type". Question type "qtype" is aggregated from the type feature of answers belonging to the question. ## Publication If you use MILQA or any models trained on it, please cite the following publication. If you train a model on MILQA, please include the following publication among the ones to be cited. Attila Novák; Borbála Novák; Tamás Zombori; Gergő Szabó; Zsolt Szántó; Richárd Farkas A Question Answering Benchmark Database for Hungarian In: Proceedings of the 17th Linguistic Annotation Workshop (LAW-XVII) Stroudsburg (PA), USA: Association for Computational Linguistics (2023) pp. 188-198., 11 p. 
``` @inproceedings{novak-etal-2023-question, title = "A Question Answering Benchmark Database for {H}ungarian", author = "Nov{\'a}k, Attila and Nov{\'a}k, Borb{\'a}la and Zombori, Tam{\'a}s and Szab{\'o}, Gerg{\H{o}} and Sz{\'a}nt{\'o}, Zsolt and Farkas, Rich{\'a}rd", booktitle = "Proceedings of the 17th Linguistic Annotation Workshop (LAW-XVII)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.law-1.19", doi = "10.18653/v1/2023.law-1.19", pages = "188--198", } ```
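As a reading aid for the Format section of the card above, here is a minimal Python sketch of how one such record might be traversed. The exact schema is not printed on this card, so the record below is hypothetical: the keys "short", "long", and "qtype" come from the card's description, while "context", "question", "text", and "answer_start" are assumptions following the SQuAD 2.0 layout.

```python
# Hypothetical MILQA-style record. The keys "short", "long", and "qtype" come
# from the card's description; "context", "question", "text", and
# "answer_start" are assumed to mirror the SQuAD 2.0 layout.
record = {
    "context": "Budapest Magyarország fővárosa. Lakossága kb. 1,7 millió fő.",
    "qas": [
        {
            "question": "Mennyi Budapest lakossága?",
            "qtype": "arithmetic",
            # list-type short answers: MILQA allows more than one per question
            "short": [{"text": "1,7 millió", "answer_start": 46}],
            "long": [{"text": "Lakossága kb. 1,7 millió fő.", "answer_start": 32}],
        }
    ],
}

def short_answers(qa):
    """Collect all short answers; MILQA allows list-type answers."""
    return [a["text"] for a in qa.get("short", [])]

for qa in record["qas"]:
    print(qa["question"], "->", short_answers(qa), "| qtype:", qa["qtype"])
```

Because MILQA allows list-type answers, `short_answers` always returns a list, which would simply be empty for an unanswerable question.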
SzegedAI/MILQA
[ "task_categories:question-answering", "task_ids:open-domain-qa", "task_ids:extractive-qa", "size_categories:10K<n<100K", "language:hu", "license:cc-by-sa-4.0", "region:us" ]
2024-01-24T15:40:03+00:00
{"language": ["hu"], "license": "cc-by-sa-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["question-answering"], "task_ids": ["open-domain-qa", "extractive-qa"], "viewer": false}
2024-01-25T08:36:18+00:00
[]
[ "hu" ]
TAGS #task_categories-question-answering #task_ids-open-domain-qa #task_ids-extractive-qa #size_categories-10K<n<100K #language-Hungarian #license-cc-by-sa-4.0 #region-us
# MILQA Hungarian question-answer benchmark database MILQA is a Hungarian machine reading comprehension, specifically, question answering (QA) benchmark database. In English, the most basic resource for the task is the Stanford Question Answering Dataset (SQuAD). The database was largely built following the principles of SQuAD 2.0, and is therefore characterized by the following: - Excerpts from high-quality Wikipedia articles are used as context for the questions (free-to-use texts, free-to-use language database). - It contains factual (not opinion) questions. - Includes questions that are not answered in the text. - The (shortest possible) answer to the question (if any) is indicated in the original text. - When formulating the questions, we have paraphrased the original text, so that in most cases the answer cannot be found by lexical search. - The questions are formulated in such a way that they are not only meaningful in the context of the text, but also stand on their own (e.g., they do not contain pronouns). Compared to SQuAD, the following innovations have been introduced (thus the Hungarian question-answer database contains more difficult questions than the original; the specific types of questions detailed below are marked separately in the database): - There can be more than one short answer to a question in the context (list-type answers; this is natural for some questions (5-6% in the database), but SQuAD never has more than one answer). - In addition to the short answer, a long answer is also given, which includes all the circumstances relevant to answering the question (min. 1 sentence, often several sentences). - Includes yes/no questions (about 10%); here, in addition to the long answer, which includes all relevant circumstances, a yes/no answer is also given. - Unanswerable questions (about 30% of questions) are relevant questions related to the topic, not questions generated by substitution from answerable questions. 
- Includes questions that can be answered by counting or performing arithmetic operations (these are difficult for current models). - Some of the unanswerable questions are "tricky questions" where a large proportion of native speakers would read an answer from the text, often based on incorrect default assumptions. These cases have been marked separately, with the hypothetical answer given. The questions were created by 5 annotators under the supervision of the Language Technology Research Group at the Pázmány Péter Catholic University, using a web annotation environment also created during the project. The database currently contains more than 23500 questions, with 70.93% of the questions being answered in the text. For further details and some baseline models trained on MILQA, please refer to the publication below. Annotators worked according to the following guidelines: - Everyday questions should be asked. - It should not be trivial to find the answer. - Some questions may be rephrased - you can ask a question concerning the same passage or answer in two different ways. - Write half as many unanswerable questions as answerable ones. - You will have 12 options for answerable questions, but you don't have to use them all. - It is not necessary to give both short and long answers in every case (there may be no short answer). - If possible, use a short answer and make it as short as possible. - Only complete words can be selected as an answer. - There may be several short answers to a question (list). This is always the answer to questions such as who and when. In such cases, the short answer selector with the given number of short answers must be selected several times in succession and the answers marked in sequence. - If the answer appears more than once in the text, select the one that is in the context pertaining to the question. - Short and long answers, or answers to different questions, may overlap in the text. 
- About 10% should be Boolean (yes/no) questions. - For Boolean questions, select a text passage as the answer (short or long, it doesn't matter) and click on the answer to bring up further options where you can tick No or Yes - If the answer is not grammatically correct in the context of the question (e.g. a different case ending should be used for the predicate used in the question), then after selecting the answer, click on the answer and tick the box Different answer. Do the same if there is a spelling mistake in the original. - Why...? (cause, effect) questions should also be included - There are no word order restrictions. You do not necessarily have to start with a question word. - Whenever possible, rephrase questions so that they do not use the same words as in the text - as many grammatical twists, word order changes, word changes, synonyms as possible, while keeping the question natural - The question should be 'self-contained', i.e. it should not contain parts that can only be understood knowing the text, e.g. pronouns. - The questions do not need to be entered in the order of the location of the answers in the text. The order of the questions is irrelevant. - If it's a text about XY, you should put XY in each question, to make the question self-contained. But it is good to have some variation in the formulation of XY as far as possible. - For unanswerable questions, ask questions that come to mind when reading the text but are not addressed in the text. Ask a question that, at least for the whole of the passage, has no answer and does not follow from it. - The question can be complex or arithmetical: e.g., the answer must be calculated from two given pieces of data. In this case, check the Arithmetic checkbox. - With "why?" questions, you can often formulate a shorter or better answer to the question. You may want to write this in the Different answer box. 
- For a counting question (how many types...), after giving x short answers, write x in the other exact answer box and put Arithmetic in the box. - If one sentence contains some information that makes the next sentence meaningful, and the short answer to the question is in sentence 2, both sentences should be included in the long answer. - Long answers should always be at least complete clauses, but preferably complete sentences or multiple complete sentences: they should contain all information relevant to the question. - If a particular passage is very ungrammatical or sounds wrong, do NOT add questions to it, leave it out. - If there are factual errors or self-contradictions in the text, do not enter questions concerning those parts. ## Format The database is stored as JSON data files. Its format is based on the format of SQuAD 2.0. However, there are lists of long and short answers (values of the keys "short" and "long"), each answer may have a "modanswer", and a special "type". Question type "qtype" is aggregated from the type feature of answers belonging to the question. ## Publication If you use MILQA or any models trained on it, please cite the following publication. If you train a model on MILQA, please include the following publication among the ones to be cited. Attila Novák; Borbála Novák; Tamás Zombori; Gergő Szabó; Zsolt Szántó; Richárd Farkas A Question Answering Benchmark Database for Hungarian In: Proceedings of the 17th Linguistic Annotation Workshop (LAW-XVII) Stroudsburg (PA), USA: Association for Computational Linguistics (2023) pp. 188-198., 11 p.
[ "# MILQA Hungarian question-answer benchmark database\n\nMILQA is a Hungarian machine reading comprehension, specifically, question answering (QA) benchmark database. In English, the most basic resource for the task is the Stanford Question Answering Dataset (SQuAD). The database was largely built following the principles of SQuAD 2.0, and is therefore characterized by the following:\n - Excerpts from high quality Wikipedia articles are used as context for the questions (free-to-use texts, free-to-use language database).\n - It contains factual (not opinion) questions.\n - Includes questions that are not answered in the text.\n - The (shortest possible) answer to the question (if any) is indicated in the original text.\n - When formulating the questions, we have paraphrased the original text, so that in most cases the answer cannot be found by lexical search.\n - The questions are formulated in such a way that they are not only meaningful in the context of the text, but also stand on their own (e.g., they do not contain pronouns.)\n\nCompared to SQUAD, the following innovations have been introduced (thus the Hungarian question-answer database contains more difficult questions than the original; the specific types of questions detailed below are marked separately in the database):\n - There can be more than one short answer to a question in the context (list-type answers; this is natural for some questions (5-6% in the database), but SQUAD never has more than one answer).\n - In addition to the short answer, a long answer is also given, which includes all the circumstances relevant to answering the question (min. 
1 sentence, often several sentences).\n - Includes yes/no questions (about 10%); here, in addition to the long answer, which includes all relevant circumstances, a yes/no answer is also given.\n - Unanswerable questions (about 30% of questions) are relevant questions related to the topic, not questions generated by substitution from answerable questions.\n - Includes questions that can be answered by counting or performing arithmetic operations (these are difficult for current models).\n - Some of the unanswerable questions are \"tricky questions\" where a large proportion of native speakers would read an answer from the text, often based on incorrect default assumptions. These cases have been marked separately, with the hypothetical answer given.\n\nThe questions were created by 5 annotators under the supervision of the Language Technology Research Group at the Pázmány Péter Catholic University, using a web annotation environment also created during the project. The database currently contains more than 23500 questions, with 70.93% of the questions being answered in the text.\n\nFor further details and some baseline models trained on MILQA, please refer to the publication below.\n\nAnnotators worked according to the following guidelines:\n - Everyday questions should be asked.\n - It should not be trivial to find the answer.\n - Some questions may be rephrased - you can ask a question concerning the same passage or answer in two different ways.\n - Write half as many unanswerable questions as answerable ones.\n - You will have 12 options for answerable questions, but you don't have to use them all.\n - It is not necessary to give both short and long answers in every case (there may be no short answer).\n - If possible, use a short answer and make it as short as possible.\n - Only complete words can be selected as an answer.\n - There may be several short answers to a question (list). This is always the answer to questions such as who and when. 
In such cases, the short answer selector with the given number of short answers must be selected several times in succession and the answers marked in sequence.\n - If the answer appears more than once in the text, select the one that is in the context pertaining to the question.\n - Short and long answers, or answers to different questions, may overlap in the text.\n - About 10% should be Boolean (yes/no) questions.\n - For Boolean questions, select a text passage as the answer (short or long, it doesn't matter) and click on the answer to bring up further options where you can tick No or Yes\n - If the answer is not grammatically correct in the context of the question (e.g. a different case ending should be used for the predicate used in the question), then after selecting the answer, click on the answer and tick the box Different answer. Do the same if there is a spelling mistake in the original.\n - Why...? (cause, effect) questions should also be included\n - There are no word order restrictions. You do not necessarily have to start with a question word.\n - Whenever possible, rephrase questions so that they do not use the same words as in the text - as many grammatical twists, word order changes, word changes, synonyms as possible, while keeping the question natural\n - The question should be 'self-contained', i.e. it should not contain parts that can only be understood knowing the text, e.g. pronouns.\n - The questions do not need to be entered in the order of the location of the answers in the text. The order of the questions is irrelevant.\n - If it's a text about XY, you should put XY in each question, to make the question self-contained. But it is good to have some variation in the formulation of XY as far as possible.\n - For unanswerable questions, ask questions that come to mind when reading the text but are not addressed in the text. 
Ask a question that, at least for the whole of the passage, has no answer and does not follow from it.\n - The question can be complex or arithmetical: e.g., the answer must be calculated from two given pieces of data. In this case, check the Arithmetic checkbox.\n - With \"why?\" questions, you can often formulate a shorter or better answer to the question. You may want to write this in the Different answer box.\n - For a counting question (how many types...), after giving x short answers, write x in the other exact answer box and put Arithmetic in the box.\n - If one sentence some information that makes the next sentence meaningful, and the short answer to question is in sentence 2, both sentences should be included in the long answer.\n - Long answers should always be at least complete clauses, but preferably complete sentences or multiple complete sentences: they should contain all information relevant to the question.\n - If a particular passage is very ungrammatical or sounds wrong, do NOT add questions to it, leave it out.\n - If there are factual errors or self-contradictions in the text, do not enter questions concerning those parts.", "## Format\n\nThe database is stored as json data files. Its format is based on the format of SQuAD 2.0. However, there are lists of long and short answers (values of the keys \"short\" and \"long\"), each answer may have a \"modanswer\", and a special \"type\". Question type \"qtype\" is aggregated from the type feature of answers belonging to the question.", "## Publication\n\nIf you use MILQA or any models trained on it, please cite the following publication. 
If you train a model on MILQA, please include the following publication among the ones to be cited.\n\nAttila Novák; Borbála Novák; Tamás Zombori; Gergő Szabó; Zsolt Szántó; Richárd Farkas\nA Question Answering Benchmark Database for Hungarian\nIn: Proceedings of the 17th Linguistic Annotation Workshop (LAW-XVII)\nStroudsburg (PA), USA: Association for Computational Linguistics (2023) pp. 188-198., 11 p." ]
[ "TAGS\n#task_categories-question-answering #task_ids-open-domain-qa #task_ids-extractive-qa #size_categories-10K<n<100K #language-Hungarian #license-cc-by-sa-4.0 #region-us \n", "# MILQA Hungarian question-answer benchmark database\n\nMILQA is a Hungarian machine reading comprehension, specifically, question answering (QA) benchmark database. In English, the most basic resource for the task is the Stanford Question Answering Dataset (SQuAD). The database was largely built following the principles of SQuAD 2.0, and is therefore characterized by the following:\n - Excerpts from high quality Wikipedia articles are used as context for the questions (free-to-use texts, free-to-use language database).\n - It contains factual (not opinion) questions.\n - Includes questions that are not answered in the text.\n - The (shortest possible) answer to the question (if any) is indicated in the original text.\n - When formulating the questions, we have paraphrased the original text, so that in most cases the answer cannot be found by lexical search.\n - The questions are formulated in such a way that they are not only meaningful in the context of the text, but also stand on their own (e.g., they do not contain pronouns.)\n\nCompared to SQUAD, the following innovations have been introduced (thus the Hungarian question-answer database contains more difficult questions than the original; the specific types of questions detailed below are marked separately in the database):\n - There can be more than one short answer to a question in the context (list-type answers; this is natural for some questions (5-6% in the database), but SQUAD never has more than one answer).\n - In addition to the short answer, a long answer is also given, which includes all the circumstances relevant to answering the question (min. 
1 sentence, often several sentences).\n - Includes yes/no questions (about 10%); here, in addition to the long answer, which includes all relevant circumstances, a yes/no answer is also given.\n - Unanswerable questions (about 30% of questions) are relevant questions related to the topic, not questions generated by substitution from answerable questions.\n - Includes questions that can be answered by counting or performing arithmetic operations (these are difficult for current models).\n - Some of the unanswerable questions are \"tricky questions\" where a large proportion of native speakers would read an answer from the text, often based on incorrect default assumptions. These cases have been marked separately, with the hypothetical answer given.\n\nThe questions were created by 5 annotators under the supervision of the Language Technology Research Group at the Pázmány Péter Catholic University, using a web annotation environment also created during the project. The database currently contains more than 23500 questions, with 70.93% of the questions being answered in the text.\n\nFor further details and some baseline models trained on MILQA, please refer to the publication below.\n\nAnnotators worked according to the following guidelines:\n - Everyday questions should be asked.\n - It should not be trivial to find the answer.\n - Some questions may be rephrased - you can ask a question concerning the same passage or answer in two different ways.\n - Write half as many unanswerable questions as answerable ones.\n - You will have 12 options for answerable questions, but you don't have to use them all.\n - It is not necessary to give both short and long answers in every case (there may be no short answer).\n - If possible, use a short answer and make it as short as possible.\n - Only complete words can be selected as an answer.\n - There may be several short answers to a question (list). This is always the answer to questions such as who and when. 
In such cases, the short answer selector with the given number of short answers must be selected several times in succession and the answers marked in sequence.\n - If the answer appears more than once in the text, select the one that is in the context pertaining to the question.\n - Short and long answers, or answers to different questions, may overlap in the text.\n - About 10% should be Boolean (yes/no) questions.\n - For Boolean questions, select a text passage as the answer (short or long, it doesn't matter) and click on the answer to bring up further options where you can tick No or Yes\n - If the answer is not grammatically correct in the context of the question (e.g. a different case ending should be used for the predicate used in the question), then after selecting the answer, click on the answer and tick the box Different answer. Do the same if there is a spelling mistake in the original.\n - Why...? (cause, effect) questions should also be included\n - There are no word order restrictions. You do not necessarily have to start with a question word.\n - Whenever possible, rephrase questions so that they do not use the same words as in the text - as many grammatical twists, word order changes, word changes, synonyms as possible, while keeping the question natural\n - The question should be 'self-contained', i.e. it should not contain parts that can only be understood knowing the text, e.g. pronouns.\n - The questions do not need to be entered in the order of the location of the answers in the text. The order of the questions is irrelevant.\n - If it's a text about XY, you should put XY in each question, to make the question self-contained. But it is good to have some variation in the formulation of XY as far as possible.\n - For unanswerable questions, ask questions that come to mind when reading the text but are not addressed in the text. 
Ask a question that, at least for the whole of the passage, has no answer and does not follow from it.\n - The question can be complex or arithmetical: e.g., the answer must be calculated from two given pieces of data. In this case, check the Arithmetic checkbox.\n - With \"why?\" questions, you can often formulate a shorter or better answer to the question. You may want to write this in the Different answer box.\n - For a counting question (how many types...), after giving x short answers, write x in the other exact answer box and put Arithmetic in the box.\n - If one sentence some information that makes the next sentence meaningful, and the short answer to question is in sentence 2, both sentences should be included in the long answer.\n - Long answers should always be at least complete clauses, but preferably complete sentences or multiple complete sentences: they should contain all information relevant to the question.\n - If a particular passage is very ungrammatical or sounds wrong, do NOT add questions to it, leave it out.\n - If there are factual errors or self-contradictions in the text, do not enter questions concerning those parts.", "## Format\n\nThe database is stored as json data files. Its format is based on the format of SQuAD 2.0. However, there are lists of long and short answers (values of the keys \"short\" and \"long\"), each answer may have a \"modanswer\", and a special \"type\". Question type \"qtype\" is aggregated from the type feature of answers belonging to the question.", "## Publication\n\nIf you use MILQA or any models trained on it, please cite the following publication. 
If you train a model on MILQA, please include the following publication among the ones to be cited.\n\nAttila Novák; Borbála Novák; Tamás Zombori; Gergő Szabó; Zsolt Szántó; Richárd Farkas\nA Question Answering Benchmark Database for Hungarian\nIn: Proceedings of the 17th Linguistic Annotation Workshop (LAW-XVII)\nStroudsburg (PA), USA: Association for Computational Linguistics (2023) pp. 188-198., 11 p." ]
f48547f36f3bb09f036033dc355526cffd3ed9b5
# Dataset Card for Dataset Name <!-- Provide a quick summary of the dataset. --> This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. 
--> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. 
--> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
salmasally/esg
[ "region:us" ]
2024-01-24T16:06:37+00:00
{}
2024-01-24T16:13:16+00:00
[]
[]
TAGS #region-us
# Dataset Card for Dataset Name This dataset card aims to be a base template for new datasets. It has been generated using this raw template. ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
1846e7bc04e821cd5110eaf6bb5d82c9f6b58d0f
# SkoleGPT Instruct <!-- Provide a quick summary of the dataset. --> "skolegpt-instruct" is an open source dataset for Danish instruction fine-tuning of LLMs. The dataset is a translation of a quality-filtered subset of the OpenOrca instruction dataset. The project is a part of the SkoleGPT project. Find the code used to generate the dataset here: https://github.com/Kobenhavns-Professionshojskole/skolegpt-instruct-dataset ## Dataset Details ### Dataset Description - **Curated by:** Kasper Junge - **Funded by:** Københavns Professionshøjskole - **Shared by:** Københavns Professionshøjskole - **Language(s) (NLP):** Danish - **License:** MIT ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** https://github.com/Kobenhavns-Professionshojskole/skolegpt-instruct-dataset ## Dataset Creation ### Source Data The dataset is a translation of a filtered subset of [OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca) #### Data Collection and Processing ##### Data Sampling The data extraction process involves loading and shuffling the [OpenOrca dataset](https://huggingface.co/datasets/Open-Orca/OpenOrca), specifically the "1M-GPT4-Augmented.parquet" file. A specified number of entries are then selected to form a subset, which is organized into a DataFrame with an added "source" column for origin tracking. This results in a manageable and tailored subset of the dataset for analysis or further processing. ##### Filtering The filter_data function is designed to preprocess and filter the raw OpenOrca dataset. This process involves several steps, each targeting specific types of data or formatting issues within the dataset. Below is an outline of these steps: 1. **Remove Already Translated Instructions:** If translated examples already exist in the [kobprof/skolegpt-instruct](https://huggingface.co/datasets/kobprof/skolegpt-instruct) dataset on the Hugging Face hub, remove them from the dataset. 2. 
**Remove Translation Instructions:** Filters out entries containing the word "translate" in the "question" field, targeting instances that are likely to be translation instructions. 3. **Remove Common Prefixes and Postfixes:** Strips common prefixes and postfixes from the "question" field. This is achieved through regular expressions constructed from provided lists of common prefixes and postfixes. 4. **Remove Questions Ending with a Colon:** Filters out entries where the "question" field ends with a colon, as these often indicate incomplete or improperly formatted questions. 5. **Remove Multiple Choice Questions:** Identifies and removes multiple-choice questions. This is done using regular expressions to detect common multiple-choice question formats, such as options labeled with letters or numbers. 6. **Basic Cleaning:** Performs basic cleaning of the dataset by stripping characters from the "system_prompt", "question", and "response" fields and removing entries where "question" or "response" fields are empty. 7. **Remove Exotic Characters:** Filters out entries containing exotic characters in the "question" and "response" fields. The list of characters to filter is dynamically generated based on the dataset content. 8. **Remove Duplicate Questions and Responses:** Eliminates duplicates in the dataset, ensuring uniqueness in both "question" and "response" fields. ##### Translation The dataset translation is carried out via the DeepL service. This process necessitates having a DeepL account with a linked credit card. DeepL provides a free tier, allowing access to their API for translating up to 500,000 characters, which can be found [here](https://support.deepl.com/hc/en-us/articles/360021200939-DeepL-API-Free). There are approximately 16 unique system prompts consistently used throughout all instructions. By translating only these unique system prompts instead of translating them for each row, we can significantly conserve character usage. 
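The filtering steps outlined above can be sketched in a few lines of Python. The patterns and field handling below are illustrative assumptions covering steps 2, 4, 5, 6, and 8 only; the actual `filter_data` implementation lives in the linked repository and may differ:

```python
import re

# Illustrative patterns -- the real project likely uses longer lists
# of common prefixes/postfixes and more elaborate multiple-choice detection.
COMMON_PREFIXES = [r"^Q:\s*", r"^Question:\s*"]
MULTIPLE_CHOICE = re.compile(r"(?m)^\s*(?:[A-D][.)]|\(?[a-d]\))\s+\S")

def filter_examples(examples):
    """Apply a subset of the filtering steps described above to a list of
    dicts with 'question' and 'response' keys."""
    seen_q, seen_r, kept = set(), set(), []
    for ex in examples:
        q = ex["question"].strip()
        r_ = ex["response"].strip()
        # Step 2: drop likely translation instructions.
        if "translate" in q.lower():
            continue
        # Step 3: strip common prefixes.
        for pat in COMMON_PREFIXES:
            q = re.sub(pat, "", q)
        # Step 4: drop questions ending with a colon.
        if q.endswith(":"):
            continue
        # Step 5: drop multiple-choice questions (lettered options).
        if MULTIPLE_CHOICE.search(q):
            continue
        # Step 6: drop entries with empty fields.
        if not q or not r_:
            continue
        # Step 8: deduplicate on both question and response.
        if q in seen_q or r_ in seen_r:
            continue
        seen_q.add(q)
        seen_r.add(r_)
        kept.append({"question": q, "response": r_})
    return kept
```

Keeping the deduplication sets over both fields mirrors step 8's requirement that questions and responses are each unique across the dataset.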
##### Survey Instructions The dataset also contains instructions collected from a survey carried out via SkoleGPT. Each instruction is paired with a GPT-4 response. ## Dataset Card Authors - Kasper Junge
kobprof/skolegpt-instruct
[ "language:da", "region:us" ]
2024-01-24T16:15:41+00:00
{"language": ["da"], "pretty_name": "SkoleGPT Instruction Dataset", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "system_prompt", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 37228526.0, "num_examples": 21580}], "download_size": 20566256, "dataset_size": 37228526.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-29T13:09:52+00:00
[]
[ "da" ]
TAGS #language-Danish #region-us
# SkoleGPT Instruct "skolegpt-instruct" is an open source dataset for Danish instruction fine-tuning of LLMs. The dataset is a translation of a quality-filtered subset of the OpenOrca instruction dataset. The project is a part of the SkoleGPT project. Find the code used to generate the dataset here: URL ## Dataset Details ### Dataset Description - Curated by: Kasper Junge - Funded by: Københavns Professionshøjskole - Shared by: Københavns Professionshøjskole - Language(s) (NLP): Danish - License: MIT ### Dataset Sources [optional] - Repository: URL ## Dataset Creation ### Source Data The dataset is a translation of a filtered subset of (OpenOrca)[URL #### Data Collection and Processing ##### Data Sampling The data extraction process involves loading and shuffling the OpenOrca dataset, specifically the "1M-GPT4-Augmented.parquet" file. A specified number of entries are then selected to form a subset, which is organized into a DataFrame with an added "source" column for origin tracking. This results in a manageable and tailored subset of the dataset for analysis or further processing. ##### Filtering The filter_data function is designed to preprocess and filter the raw OpenOrca dataset. This process involves several steps, each targeting specific types of data or formatting issues within the dataset. Below is an outline of these steps: 1. Remove Already Translated Instructions: If translated examples already exist in the kobprof/skolegpt-instruct dataset on the Hugging Face hub, remove them from the dataset. 2. Remove Translation Instructions: Filters out entries containing the word "translate" in the "question" field, targeting instances that are likely to be translation instructions. 3. Remove Common Prefixes and Postfixes: Strips common prefixes and postfixes from the "question" field. This is achieved through regular expressions constructed from provided lists of common prefixes and postfixes. 4. 
Remove Questions Ending with a Colon: Filters out entries where the "question" field ends with a colon, as these often indicate incomplete or improperly formatted questions. 5. Remove Multiple Choice Questions: Identifies and removes multiple-choice questions. This is done using regular expressions to detect common multiple-choice question formats, such as options labeled with letters or numbers. 6. Basic Cleaning: Performs basic cleaning of the dataset by stripping characters from the "system_prompt", "question", and "response" fields and removing entries where "question" or "response" fields are empty. 7. Remove Exotic Characters: Filters out entries containing exotic characters in the "question" and "response" fields. The list of characters to filter is dynamically generated based on the dataset content. 8. Remove Duplicate Questions and Responses: Eliminates duplicates in the dataset, ensuring uniqueness in both "question" and "response" fields. ##### Translation The dataset translation is carried out via the DeepL service. This process necessitates having a DeepL account with a linked credit card. DeepL provides a free tier, allowing access to their API for translating up to 500,000 characters, which can be found here. There are approximately 16 unique system prompts consistently used throughout all instructions. By translating only these unique system prompts instead of translating them for each row, we can significantly conserve character usage. ##### Survey Instructions The dataset also contains instructions collected from a survey carried out via SkoleGPT. Each instruction is paired with a GPT-4 response. ## Dataset Card Authors - Kasper Junge
[ "# SkoleGPT Instruct \n\n\n\n\"skolegpt-instruct\" is an open source dataset for Danish instruction fine-tuning of LLM's. The dataset is translation of a quality filtered subset of the OpenOrca instruction dataset. The project is a part of the SkoleGPT project. Find the code used to generate the dataset here: URL", "## Dataset Details", "### Dataset Description\n\n\n- Curated by: Kasper Junge\n- Funded by: Københavns Professionshøjskole\n- Shared by: Københavns Professionshøjskole\n- Language(s) (NLP): Danish\n- License: MIT", "### Dataset Sources [optional]\n\n\n\n- Repository: URL", "## Dataset Creation", "### Source Data\n\nThe dataset is a translation of a filtered subset of (OpenOrca)[URL", "#### Data Collection and Processing", "##### Data Sampling\nThe data extraction process involves loading and shuffling the OpenOrca dataset, specifically the \"1M-GPT4-Augmented.parquet\" file. A specified number of entries are then selected to form a subset, which is organized into a DataFrame with an added \"source\" column for origin tracking. This results in a manageable and tailored subset of the dataset for analysis or further processing.", "##### Filtering\nThe filter_data function is designed to preprocess and filter the raw OpenOrca dataset. This process involves several steps, each targeting specific types of data or formatting issues within the dataset. \n\nBelow is an outline of these steps:\n\n1. Remove Already Translated Instructions: If translated examples already exists in the kobprof/skolegpt-instruct dataset on the Hugging Face hub, remove them from the dataset.\n\n2. Remove Translation Instructions: Filters out entries containing the word \"translate\" in the \"question\" field, targeting instances that are likely to be translation instructions.\n\n3. Remove Common Prefixes and Postfixes: Strips common prefixes and postfixes from the \"question\" field. 
This is achieved through regular expressions constructed from provided lists of common prefixes and postfixes.\n\n4. Remove Questions Ending with a Colon: Filters out entries where the \"question\" field ends with a colon, as these often indicate incomplete or improperly formatted questions.\n\n5. Remove Multiple Choice Questions: Identifies and removes multiple-choice questions. This is done using regular expressions to detect common multiple-choice question formats, such as options labeled with letters or numbers.\n\n6. Basic Cleaning: Performs basic cleaning of the dataset by stripping characters from the \"system_prompt\", \"question\", and \"response\" fields and removing entries where \"question\" or \"response\" fields are empty.\n\n7. Remove Exotic Characters: Filters out entries containing exotic characters in the \"question\" and \"response\" fields. The list of characters to filter is dynamically generated based on the dataset content.\n\n8. Remove Duplicate Questions and Responses: Eliminates duplicates in the dataset, ensuring uniqueness in both \"question\" and \"response\" fields.", "##### Translation\nThe dataset translation is carried out via the DeepL service. This process necessitates having a DeepL account with a linked credit card. DeepL provides a free tier, allowing access to their API for translating up to 500,000 characters, which can be found here. There are approximately 16 unique system prompts consistently used throughout all instructions. By translating only these unique system prompts instead of translating them for each row, we can significantly conserve character usage.", "##### Survey Instructions\nThe dataset also contains instructions collected from a survey carried output the SkoleGPT. Each instruction is paried with a GPT-4 response.", "## Dataset Card Authors\n- Kasper Junge" ]
[ "TAGS\n#language-Danish #region-us \n", "# SkoleGPT Instruct \n\n\n\n\"skolegpt-instruct\" is an open source dataset for Danish instruction fine-tuning of LLM's. The dataset is translation of a quality filtered subset of the OpenOrca instruction dataset. The project is a part of the SkoleGPT project. Find the code used to generate the dataset here: URL", "## Dataset Details", "### Dataset Description\n\n\n- Curated by: Kasper Junge\n- Funded by: Københavns Professionshøjskole\n- Shared by: Københavns Professionshøjskole\n- Language(s) (NLP): Danish\n- License: MIT", "### Dataset Sources [optional]\n\n\n\n- Repository: URL", "## Dataset Creation", "### Source Data\n\nThe dataset is a translation of a filtered subset of (OpenOrca)[URL", "#### Data Collection and Processing", "##### Data Sampling\nThe data extraction process involves loading and shuffling the OpenOrca dataset, specifically the \"1M-GPT4-Augmented.parquet\" file. A specified number of entries are then selected to form a subset, which is organized into a DataFrame with an added \"source\" column for origin tracking. This results in a manageable and tailored subset of the dataset for analysis or further processing.", "##### Filtering\nThe filter_data function is designed to preprocess and filter the raw OpenOrca dataset. This process involves several steps, each targeting specific types of data or formatting issues within the dataset. \n\nBelow is an outline of these steps:\n\n1. Remove Already Translated Instructions: If translated examples already exists in the kobprof/skolegpt-instruct dataset on the Hugging Face hub, remove them from the dataset.\n\n2. Remove Translation Instructions: Filters out entries containing the word \"translate\" in the \"question\" field, targeting instances that are likely to be translation instructions.\n\n3. Remove Common Prefixes and Postfixes: Strips common prefixes and postfixes from the \"question\" field. 
This is achieved through regular expressions constructed from provided lists of common prefixes and postfixes.\n\n4. Remove Questions Ending with a Colon: Filters out entries where the \"question\" field ends with a colon, as these often indicate incomplete or improperly formatted questions.\n\n5. Remove Multiple Choice Questions: Identifies and removes multiple-choice questions. This is done using regular expressions to detect common multiple-choice question formats, such as options labeled with letters or numbers.\n\n6. Basic Cleaning: Performs basic cleaning of the dataset by stripping characters from the \"system_prompt\", \"question\", and \"response\" fields and removing entries where \"question\" or \"response\" fields are empty.\n\n7. Remove Exotic Characters: Filters out entries containing exotic characters in the \"question\" and \"response\" fields. The list of characters to filter is dynamically generated based on the dataset content.\n\n8. Remove Duplicate Questions and Responses: Eliminates duplicates in the dataset, ensuring uniqueness in both \"question\" and \"response\" fields.", "##### Translation\nThe dataset translation is carried out via the DeepL service. This process necessitates having a DeepL account with a linked credit card. DeepL provides a free tier, allowing access to their API for translating up to 500,000 characters, which can be found here. There are approximately 16 unique system prompts consistently used throughout all instructions. By translating only these unique system prompts instead of translating them for each row, we can significantly conserve character usage.", "##### Survey Instructions\nThe dataset also contains instructions collected from a survey carried output the SkoleGPT. Each instruction is paried with a GPT-4 response.", "## Dataset Card Authors\n- Kasper Junge" ]
f762c071298ffb46c006c4eab565958b1ab00650
# Dataset Card for "self-reward-dev1706112590" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
vwxyzjn/self-reward-dev1706112590
[ "region:us" ]
2024-01-24T16:16:23+00:00
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "candidate_0", "dtype": "string"}, {"name": "score_0", "dtype": "float64"}, {"name": "candidate_1", "dtype": "string"}, {"name": "score_1", "dtype": "float64"}, {"name": "candidate_2", "dtype": "string"}, {"name": "score_2", "dtype": "float64"}, {"name": "candidate_3", "dtype": "string"}, {"name": "score_3", "dtype": "float64"}, {"name": "chosen", "dtype": "string"}, {"name": "chosen_score", "dtype": "float64"}, {"name": "chosen_idx", "dtype": "int64"}, {"name": "rejected", "dtype": "string"}, {"name": "rejected_score", "dtype": "float64"}, {"name": "rejected_idx", "dtype": "int64"}], "splits": [{"name": "train_sft_sft", "num_bytes": 60888, "num_examples": 4}, {"name": "test_sft_sft", "num_bytes": 51284, "num_examples": 4}, {"name": "train_gen_sft", "num_bytes": 87465, "num_examples": 4}, {"name": "test_gen_sft", "num_bytes": 74181, "num_examples": 4}], "download_size": 471439, "dataset_size": 273818}, "configs": [{"config_name": "default", "data_files": [{"split": "train_sft_sft", "path": "data/train_sft_sft-*"}, {"split": "test_sft_sft", "path": "data/test_sft_sft-*"}, {"split": "train_gen_sft", "path": "data/train_gen_sft-*"}, {"split": "test_gen_sft", "path": "data/test_gen_sft-*"}]}]}
2024-01-24T16:16:29+00:00
[]
[]
TAGS #region-us
# Dataset Card for "self-reward-dev1706112590" More Information needed
[ "# Dataset Card for \"self-reward-dev1706112590\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"self-reward-dev1706112590\"\n\nMore Information needed" ]
d64842708b2c85bb7c5544f184cc42a843b9ae9c
# Dataset Card for "aaaw" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
nailiamirzakhmedova/aaaw
[ "region:us" ]
2024-01-24T16:17:57+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "argument", "dtype": "string"}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 160485, "num_examples": 80}], "download_size": 95990, "dataset_size": 160485}}
2024-01-24T16:17:59+00:00
[]
[]
TAGS #region-us
# Dataset Card for "aaaw" More Information needed
[ "# Dataset Card for \"aaaw\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"aaaw\"\n\nMore Information needed" ]
ebfbb904cb2890d79df667a9493c93dc26cfed1b
# Dataset Card for "dbnl.org-dutch-public-domain" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [DBNL Public Domain Collection](https://www.dbnl.org/letterkunde/pd/index.php) - **Point of Contact:** julian at vdgoltz.net ### Dataset Summary This dataset comprises a collection of texts from the Dutch Literature in the public domain, specifically from the DBNL (Digitale Bibliotheek voor de Nederlandse Letteren) public domain collection. The collection includes books, poems, songs, and other documentation, letters, etc., that are at least 140 years old and thus free of copyright restrictions. Each entry in the dataset corresponds to one section of a chapter of a text, ensuring a granular level of detail for text analysis. ### Supported Tasks and Leaderboards - Language Modeling - Text Generation - Other tasks that can benefit from historical Dutch texts ### Languages The dataset is primarily in Dutch (nl). 
## Dataset Structure ### Data Instances A data instance corresponds to a section of a chapter of a document, including metadata such as title, author, publication year, and the text content itself. ### Data Fields - `ti_id`: Unique text identifier - `titel`: Title of the text - `jaar`: Publication year - `druk`: Edition - `bibliotheek`: Library code - `categorie`: Category ID - `pers_id`: Person ID - `voornaam`: Author's first name - `achternaam`: Author's last name - `url`: URL to the text - `text_url`: URL to the text in .txt format - `revision_date`: Date of the revision - `edition`: Edition details - `language`: Language of the text - `chapter`: Chapter number - `section`: Section number ### Data Splits The dataset is split into training and validation sets at text level (90:10), ensuring that sections or chapters from the same document do not leak from one split to another. ## Dataset Creation ### Curation Rationale The dataset was curated to make historical Dutch texts available for computational analysis, preserving cultural heritage and supporting research in the humanities and linguistic studies. ### Source Data #### Initial Data Collection and Normalization Data was collected from the DBNL's public domain collection, normalized, and structured to facilitate computational use. #### Who are the source language producers? The source language producers are authors of Dutch literature whose works have entered the public domain, implying their passing at least 70 years ago. ### Annotations The dataset does not contain annotations. ### Personal and Sensitive Information Given the historical nature of the texts, they are free from personal and sensitive information concerns in the contemporary sense. However, they reflect the social norms, biases, and cultural contexts of their time. 
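A leakage-free 90:10 split of the kind described under Data Splits above can be sketched as follows; the function name and seed here are illustrative, not the script actually used to produce the published splits:

```python
import random

def split_by_document(rows, val_fraction=0.1, seed=42):
    """Split section-level rows at document (ti_id) level so that no
    document contributes sections to both splits."""
    doc_ids = sorted({row["ti_id"] for row in rows})
    rng = random.Random(seed)
    rng.shuffle(doc_ids)
    # Hold out ~10% of documents (at least one) for validation.
    n_val = max(1, round(len(doc_ids) * val_fraction))
    val_ids = set(doc_ids[:n_val])
    train = [r for r in rows if r["ti_id"] not in val_ids]
    val = [r for r in rows if r["ti_id"] in val_ids]
    return train, val
```

Sampling documents rather than sections is what prevents sections or chapters of the same text from leaking between the training and validation sets.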
## Considerations for Using the Data ### Social Impact of Dataset The dataset serves as a valuable resource for understanding Dutch literary heritage, cultural history, and language evolution over time. It can support diverse research agendas in computational linguistics, cultural studies, and history. ### Discussion of Biases The texts contain biases prevalent at their time of publication, including colonialism, racism, sexism, and other societal norms of their era. Users are urged to consider these contexts critically and use the data responsibly. ### Other Known Limitations The dataset's historical nature means it may not be suitable for applications requiring contemporary language use or norms. ## Additional Information ### Dataset Curators This dataset was curated by https://huggingface.co/jvdgoltz, who is not affiliated with DBNL.org and did not act on their behalf. The data is sourced from the DBNL public domain collection. ### Licensing Information The texts in this dataset are in the public domain. According to ChatGPT-4, the best-fitting license would be: Creative Commons Zero v1.0 Universal, making them legally available for use, redistribution, and adaptation by anyone for any purpose. ### Citation Information Not applicable.
jvdgoltz/dbnl.org-dutch-public-domain
[ "task_categories:text-generation", "task_categories:fill-mask", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:nl", "license:cc0-1.0", "region:us" ]
2024-01-24T16:20:49+00:00
{"language": ["nl"], "license": ["cc0-1.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "task_categories": ["text-generation", "fill-mask"], "dataset_info": {"features": [{"name": "meta", "struct": [{"name": "Unnamed: 28", "dtype": "string"}, {"name": "_jaar", "dtype": "int64"}, {"name": "achternaam", "dtype": "string"}, {"name": "bibliotheek", "dtype": "string"}, {"name": "categorie", "dtype": "int64"}, {"name": "chapter", "dtype": "int64"}, {"name": "druk", "dtype": "string"}, {"name": "edition", "dtype": "string"}, {"name": "geb_datum", "dtype": "string"}, {"name": "geb_land_code", "dtype": "string"}, {"name": "geb_plaats", "dtype": "string"}, {"name": "geb_plaats_code", "dtype": "string"}, {"name": "genre", "dtype": "string"}, {"name": "jaar", "dtype": "string"}, {"name": "jaar_geboren", "dtype": "string"}, {"name": "jaar_overlijden", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "maand", "dtype": "string"}, {"name": "overl_datum", "dtype": "string"}, {"name": "overl_land_code", "dtype": "string"}, {"name": "overl_plaats", "dtype": "string"}, {"name": "overl_plaats_code", "dtype": "string"}, {"name": "pers_id", "dtype": "string"}, {"name": "ppn_o", "dtype": "string"}, {"name": "revision_date", "dtype": "string"}, {"name": "section", "dtype": "int64"}, {"name": "text_url", "dtype": "string"}, {"name": "ti_id", "dtype": "string"}, {"name": "titel", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "vols", "dtype": "string"}, {"name": "voornaam", "dtype": "string"}, {"name": "voorvoegsel", "dtype": "string"}, {"name": "vrouw", "dtype": "int64"}]}, {"name": "text", "dtype": "string"}, {"name": "id", "dtype": "string"}]}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "train.parquet"}, {"split": "validation", "path": "validation.parquet"}]}]}
2024-02-09T12:59:59+00:00
[]
[ "nl" ]
TAGS #task_categories-text-generation #task_categories-fill-mask #multilinguality-monolingual #size_categories-100K<n<1M #language-Dutch #license-cc0-1.0 #region-us
# Dataset Card for "URL-dutch-public-domain" ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: DBNL Public Domain Collection - Point of Contact: julian at URL ### Dataset Summary This dataset comprises a collection of texts from the Dutch Literature in the public domain, specifically from the DBNL (Digitale Bibliotheek voor de Nederlandse Letteren) public domain collection. The collection includes books, poems, songs, and other documentation, letters, etc., that are at least 140 years old and thus free of copyright restrictions. Each entry in the dataset corresponds to one section of a chapter of a text, ensuring a granular level of detail for text analysis. ### Supported Tasks and Leaderboards - Language Modeling - Text Generation - Other tasks that can benefit from historical Dutch texts ### Languages The dataset is primarily in Dutch (nl). ## Dataset Structure ### Data Instances A data instance corresponds to a section of a chapter of a document, including metadata such as title, author, publication year, and the text content itself. 
### Data Fields - 'ti_id': Unique text identifier - 'titel': Title of the text - 'jaar': Publication year - 'druk': Edition - 'bibliotheek': Library code - 'categorie': Category ID - 'pers_id': Person ID - 'voornaam': Author's first name - 'achternaam': Author's last name - 'url': URL to the text - 'text_url': URL to the text in .txt format - 'revision_date': Date of the revision - 'edition': Edition details - 'language': Language of the text - 'chapter': Chapter number - 'section': Section number ### Data Splits The dataset is split into training and validation sets at text level (90:10), ensuring that sections or chapters from the same document do not leak from one split to another. ## Dataset Creation ### Curation Rationale The dataset was curated to make historical Dutch texts available for computational analysis, preserving cultural heritage and supporting research in the humanities and linguistic studies. ### Source Data #### Initial Data Collection and Normalization Data was collected from the DBNL's public domain collection, normalized, and structured to facilitate computational use. #### Who are the source language producers? The source language producers are authors of Dutch literature whose works have entered the public domain, implying their passing at least 70 years ago. ### Annotations The dataset does not contain annotations. ### Personal and Sensitive Information Given the historical nature of the texts, they are free from personal and sensitive information concerns in the contemporary sense. However, they reflect the social norms, biases, and cultural contexts of their time. ## Considerations for Using the Data ### Social Impact of Dataset The dataset serves as a valuable resource for understanding Dutch literary heritage, cultural history, and language evolution over time. It can support diverse research agendas in computational linguistics, cultural studies, and history. 
### Discussion of Biases The texts contain biases prevalent at their time of publication, including colonialism, racism, sexism, and other societal norms of their era. Users are urged to consider these contexts critically and use the data responsibly. ### Other Known Limitations The dataset's historical nature means it may not be suitable for applications requiring contemporary language use or norms. ## Additional Information ### Dataset Curators This dataset was curated by URL who is not affiliated with URL and did not act on their behalf. The data is sourced from the DBNL public domain collection. ### Licensing Information The texts in this dataset are in the public domain. According to Chat-GPT 4, the best fitting license would be: Creative Commons Zero v1.0 Universal, making them legally available for use, redistribution, and adaptation by anyone for any purpose. Not applicable.
[ "# Dataset Card for \"URL-dutch-public-domain\"", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: DBNL Public Domain Collection\n- Point of Contact: julian at URL", "### Dataset Summary\n\nThis dataset comprises a collection of texts from the Dutch Literature in the public domain, specifically from the DBNL (Digitale Bibliotheek voor de Nederlandse Letteren) public domain collection. The collection includes books, poems, songs, and other documentation, letters, etc., that are at least 140 years old and thus free of copyright restrictions. 
Each entry in the dataset corresponds to one section of a chapter of a text, ensuring a granular level of detail for text analysis.", "### Supported Tasks and Leaderboards\n\n- Language Modeling\n- Text Generation\n- Other tasks that can benefit from historical Dutch texts", "### Languages\n\nThe dataset is primarily in Dutch (nl).", "## Dataset Structure", "### Data Instances\n\nA data instance corresponds to a section of a chapter of a document, including metadata such as title, author, publication year, and the text content itself.", "### Data Fields\n\n- 'ti_id': Unique text identifier\n- 'titel': Title of the text\n- 'jaar': Publication year\n- 'druk': Edition\n- 'bibliotheek': Library code\n- 'categorie': Category ID\n- 'pers_id': Person ID\n- 'voornaam': Author's first name\n- 'achternaam': Author's last name\n- 'url': URL to the text\n- 'text_url': URL to the text in .txt format\n- 'revision_date': Date of the revision\n- 'edition': Edition details\n- 'language': Language of the text\n- 'chapter': Chapter number\n- 'section': Section number", "### Data Splits\n\nThe dataset is split into training and validation sets at text level (90:10), ensuring that sections or chapters from the same document do not leak from one split to another.", "## Dataset Creation", "### Curation Rationale\n\nThe dataset was curated to make historical Dutch texts available for computational analysis, preserving cultural heritage and supporting research in the humanities and linguistic studies.", "### Source Data", "#### Initial Data Collection and Normalization\n\nData was collected from the DBNL's public domain collection, normalized, and structured to facilitate computational use.", "#### Who are the source language producers?\n\nThe source language producers are authors of Dutch literature whose works have entered the public domain, implying their passing at least 70 years ago.", "### Annotations\n\nThe dataset does not contain annotations.", "### Personal and Sensitive 
Information\n\nGiven the historical nature of the texts, they are free from personal and sensitive information concerns in the contemporary sense. However, they reflect the social norms, biases, and cultural contexts of their time.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThe dataset serves as a valuable resource for understanding Dutch literary heritage, cultural history, and language evolution over time. It can support diverse research agendas in computational linguistics, cultural studies, and history.", "### Discussion of Biases\n\nThe texts contain biases prevalent at their time of publication, including colonialism, racism, sexism, and other societal norms of their era. Users are urged to consider these contexts critically and use the data responsibly.", "### Other Known Limitations\n\nThe dataset's historical nature means it may not be suitable for applications requiring contemporary language use or norms.", "## Additional Information", "### Dataset Curators\n\nThis dataset was curated by URL who is not affiliated with URL and did not act on their behalf. The data is sourced from the DBNL public domain collection.", "### Licensing Information\n\nThe texts in this dataset are in the public domain. According to Chat-GPT 4, the best fitting license would be: Creative Commons Zero v1.0 Universal, making them legally available for use, redistribution, and adaptation by anyone for any purpose.\n\n\n\nNot applicable." ]
[ "TAGS\n#task_categories-text-generation #task_categories-fill-mask #multilinguality-monolingual #size_categories-100K<n<1M #language-Dutch #license-cc0-1.0 #region-us \n", "# Dataset Card for \"URL-dutch-public-domain\"", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: DBNL Public Domain Collection\n- Point of Contact: julian at URL", "### Dataset Summary\n\nThis dataset comprises a collection of texts from the Dutch Literature in the public domain, specifically from the DBNL (Digitale Bibliotheek voor de Nederlandse Letteren) public domain collection. The collection includes books, poems, songs, and other documentation, letters, etc., that are at least 140 years old and thus free of copyright restrictions. 
Each entry in the dataset corresponds to one section of a chapter of a text, ensuring a granular level of detail for text analysis.", "### Supported Tasks and Leaderboards\n\n- Language Modeling\n- Text Generation\n- Other tasks that can benefit from historical Dutch texts", "### Languages\n\nThe dataset is primarily in Dutch (nl).", "## Dataset Structure", "### Data Instances\n\nA data instance corresponds to a section of a chapter of a document, including metadata such as title, author, publication year, and the text content itself.", "### Data Fields\n\n- 'ti_id': Unique text identifier\n- 'titel': Title of the text\n- 'jaar': Publication year\n- 'druk': Edition\n- 'bibliotheek': Library code\n- 'categorie': Category ID\n- 'pers_id': Person ID\n- 'voornaam': Author's first name\n- 'achternaam': Author's last name\n- 'url': URL to the text\n- 'text_url': URL to the text in .txt format\n- 'revision_date': Date of the revision\n- 'edition': Edition details\n- 'language': Language of the text\n- 'chapter': Chapter number\n- 'section': Section number", "### Data Splits\n\nThe dataset is split into training and validation sets at text level (90:10), ensuring that sections or chapters from the same document do not leak from one split to another.", "## Dataset Creation", "### Curation Rationale\n\nThe dataset was curated to make historical Dutch texts available for computational analysis, preserving cultural heritage and supporting research in the humanities and linguistic studies.", "### Source Data", "#### Initial Data Collection and Normalization\n\nData was collected from the DBNL's public domain collection, normalized, and structured to facilitate computational use.", "#### Who are the source language producers?\n\nThe source language producers are authors of Dutch literature whose works have entered the public domain, implying their passing at least 70 years ago.", "### Annotations\n\nThe dataset does not contain annotations.", "### Personal and Sensitive 
Information\n\nGiven the historical nature of the texts, they are free from personal and sensitive information concerns in the contemporary sense. However, they reflect the social norms, biases, and cultural contexts of their time.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThe dataset serves as a valuable resource for understanding Dutch literary heritage, cultural history, and language evolution over time. It can support diverse research agendas in computational linguistics, cultural studies, and history.", "### Discussion of Biases\n\nThe texts contain biases prevalent at their time of publication, including colonialism, racism, sexism, and other societal norms of their era. Users are urged to consider these contexts critically and use the data responsibly.", "### Other Known Limitations\n\nThe dataset's historical nature means it may not be suitable for applications requiring contemporary language use or norms.", "## Additional Information", "### Dataset Curators\n\nThis dataset was curated by URL who is not affiliated with URL and did not act on their behalf. The data is sourced from the DBNL public domain collection.", "### Licensing Information\n\nThe texts in this dataset are in the public domain. According to Chat-GPT 4, the best fitting license would be: Creative Commons Zero v1.0 Universal, making them legally available for use, redistribution, and adaptation by anyone for any purpose.\n\n\n\nNot applicable." ]
66451bab832496a3f15691b22ae02ac3c4f2a07d
This dataset is a scrape of the League of Legends wiki, reflecting the wiki's most recent version at the time of collection and covering 166 champions. The data consists of: champion name, champion icon URL, champion wiki URL, stats, biography, passive ability, ability 1, ability 2, ability 3, ability 4, and curiosities.
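As a sketch, one row of the scrape could be modelled as a record type like the following (the field names here are illustrative assumptions mirroring the columns listed above, not a confirmed schema of the released files):

```python
from dataclasses import dataclass, field

@dataclass
class ChampionRecord:
    # Field names are illustrative; they mirror the columns described
    # in the card, not necessarily the exact headers in the data files.
    name: str
    icon_url: str
    wiki_url: str
    biography: str
    passive_ability: str
    abilities: list        # abilities 1-4, in order
    curiosities: str
    stats: dict = field(default_factory=dict)
```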
Arconte/league_of_legends_wiki_scrape
[ "size_categories:n<1K", "language:en", "license:mit", "region:us" ]
2024-01-24T16:34:20+00:00
{"language": ["en"], "license": "mit", "size_categories": ["n<1K"], "pretty_name": "League of legends wiki scrape-166"}
2024-01-24T16:39:27+00:00
[]
[ "en" ]
TAGS #size_categories-n<1K #language-English #license-mit #region-us
This dataset is a scrape of the League of Legends wiki, reflecting the wiki's most recent version at the time of collection and covering 166 champions. The data consists of: champion name, champion icon URL, champion wiki URL, stats, biography, passive ability, ability 1, ability 2, ability 3, ability 4, and curiosities.
[]
[ "TAGS\n#size_categories-n<1K #language-English #license-mit #region-us \n" ]
494e681fa4e8c0f35beab6be83b9ca27ba49607c
# Dataset Sourced from: https://github.com/ltgoslo/NorQuAD @InProceedings{IvaAndJen2023, author = {Sardana Ivanova, Fredrik Aas Andreassen, Matias Jentoft, Sondre Wold and Lilja {\O}vrelid}, title = {NorQuAD: Norwegian Question Answering Dataset}, booktitle = {Proceedings of the 24th Nordic Conference on Computational Linguistics}, year = {2023}, address = {Tórshavn, Faroe Islands}, }
eanderson/NorQuAD
[ "language:no", "license:mit", "region:us" ]
2024-01-24T16:37:35+00:00
{"language": ["no"], "license": "mit", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "train.json"}, {"split": "test", "path": "test.json"}, {"split": "val", "path": "val.json"}]}]}
2024-01-25T14:49:43+00:00
[]
[ "no" ]
TAGS #language-Norwegian #license-mit #region-us
# Dataset Sourced from: URL @InProceedings{IvaAndJen2023, author = {Sardana Ivanova, Fredrik Aas Andreassen, Matias Jentoft, Sondre Wold and Lilja {\O}vrelid}, title = {NorQuAD: Norwegian Question Answering Dataset}, booktitle = {Proceedings of the 24th Nordic Conference on Computational Linguistics}, year = {2023}, address = {Tórshavn, Faroe Islands}, }
[ "# Dataset\n\nSourced from:\n\nURL\n\n@InProceedings{IvaAndJen2023,\n author = {Sardana Ivanova, Fredrik Aas Andreassen, Matias Jentoft, Sondre Wold and Lilja {\\O}vrelid},\n title = {NorQuAD: Norwegian Question Answering Dataset},\n booktitle = {Proceedings of the 24th Nordic Conference on Computational Linguistics},\n year = {2023},\n address = {Tórshavn, Faroe Islands},\n}" ]
[ "TAGS\n#language-Norwegian #license-mit #region-us \n", "# Dataset\n\nSourced from:\n\nURL\n\n@InProceedings{IvaAndJen2023,\n author = {Sardana Ivanova, Fredrik Aas Andreassen, Matias Jentoft, Sondre Wold and Lilja {\\O}vrelid},\n title = {NorQuAD: Norwegian Question Answering Dataset},\n booktitle = {Proceedings of the 24th Nordic Conference on Computational Linguistics},\n year = {2023},\n address = {Tórshavn, Faroe Islands},\n}" ]
cc9202f1a0288df6e01685c4090a12a6f71b967b
# Dataset Card for "c_x86_O0_exebench_numeric_2k_json_cleaned" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
zhangshuoming/c_x86_O0_exebench_numeric_2k_json_cleaned
[ "region:us" ]
2024-01-24T16:42:54+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6048893.6625, "num_examples": 925}], "download_size": 348560, "dataset_size": 6048893.6625}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-24T16:52:37+00:00
[]
[]
TAGS #region-us
# Dataset Card for "c_x86_O0_exebench_numeric_2k_json_cleaned" More Information needed
[ "# Dataset Card for \"c_x86_O0_exebench_numeric_2k_json_cleaned\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"c_x86_O0_exebench_numeric_2k_json_cleaned\"\n\nMore Information needed" ]
916c84344ca667c00aa24e7c31cb3441b81d1646
# Blood-Brain Barrier Database (B3DB) The [Blood-Brain Barrier Database (B3DB)](https://www.nature.com/articles/s41597-021-01069-5) is a large benchmark dataset compiled from 50 published resources (as summarized at raw_data/raw_data_summary.tsv) and categorized based on the consistency between different experimental references/measurements. This dataset was [published in Scientific Data](https://www.nature.com/articles/s41597-021-01069-5) and is a mirror of [theochem/B3DB](https://github.com/theochem/B3DB), the official GitHub repo, which is occasionally updated with new experimental data. Scientists who would like to contribute data should contact the database's maintainers (e.g., by creating a [new Issue](https://github.com/theochem/B3DB/issues) in the database). A subset of the molecules in B3DB has numerical `logBB` values (1058 compounds), while the whole dataset has categorical (`BBB+` or `BBB-`) BBB permeability labels (7807 compounds). Some physicochemical properties of the molecules are also provided. ## Quickstart Usage ### Load a dataset in python Each subset can be loaded into python using the Huggingface [datasets](https://huggingface.co/docs/datasets/index) library. 
First, from the command line install the `datasets` library $ pip install datasets then, from within python load the datasets library >>> import datasets and load one of the `B3DB` datasets, e.g., >>> B3DB_classification = datasets.load_dataset("maomlab/B3DB", name = "B3DB_classification") Downloading readme: 100%|████████████████████████| 4.40k/4.40k [00:00<00:00, 1.35MB/s] Downloading data: 100%|██████████████████████████| 680k/680k [00:00<00:00, 946kB/s] Downloading data: 100%|██████████████████████████| 2.11M/2.11M [00:01<00:00, 1.28MB/s] Generating test split: 100%|█████████████████████| 1951/1951 [00:00<00:00, 20854.95 examples/s] Generating train split: 100%|████████████████████| 5856/5856 [00:00<00:00, 144260.80 examples/s] and inspecting the loaded dataset >>> B3DB_classification B3DB_classification DatasetDict({ test: Dataset({ features: ['NO.', 'compound_name', 'IUPAC_name', 'SMILES', 'CID', 'logBB', 'BBB+/BBB-', 'Inchi', 'threshold', 'reference', 'group', 'comments', 'ClusterNo', 'MolCount'], num_rows: 1951 }) train: Dataset({ features: ['NO.', 'compound_name', 'IUPAC_name', 'SMILES', 'CID', 'logBB', 'BBB+/BBB-', 'Inchi', 'threshold', 'reference', 'group', 'comments', 'ClusterNo', 'MolCount'], num_rows: 5856 }) }) ### Use a dataset to train a model One way to use the dataset is through the [MolFlux](https://exscientia.github.io/molflux/) package developed by Exscientia. 
First, from the command line, install the `MolFlux` library with `catboost` and `rdkit` support pip install 'molflux[catboost,rdkit]' then load, featurize, fit, and evaluate a CatBoost model import json from datasets import load_dataset from molflux.datasets import featurise_dataset from molflux.features import load_from_dicts as load_representations_from_dicts from molflux.splits import load_from_dict as load_split_from_dict from molflux.modelzoo import load_from_dict as load_model_from_dict from molflux.metrics import load_suite split_dataset = load_dataset('maomlab/B3DB', name = 'B3DB_classification') split_featurised_dataset = featurise_dataset( split_dataset, column = "SMILES", representations = load_representations_from_dicts([{"name": "morgan"}, {"name": "maccs_rdkit"}])) model = load_model_from_dict({ "name": "cat_boost_classifier", "config": { "x_features": ['SMILES::morgan', 'SMILES::maccs_rdkit'], "y_features": ['BBB+/BBB-']}}) model.train(split_featurised_dataset["train"]) preds = model.predict(split_featurised_dataset["test"]) classification_suite = load_suite("classification") scores = classification_suite.compute( references=split_featurised_dataset["test"]['BBB+/BBB-'], predictions=preds["cat_boost_classifier::BBB+/BBB-"]) ## About the B3DB ### Features of *B3DB* 1. The largest dataset with numerical and categorical values for Blood-Brain Barrier small molecules (to the best of our knowledge, as of February 25, 2021). 2. Inclusion of stereochemistry information with isomeric SMILES with chiral specifications if available. Otherwise, canonical SMILES are used. 3. Characterization of uncertainty of experimental measurements by grouping the collected molecular data records. 4. Extended datasets for numerical and categorical data with precomputed physicochemical properties using [mordred](https://github.com/mordred-descriptor/mordred). 
### Data splits The original B3DB dataset does not define splits, so here we have used the `Realistic Split` method described in [(Martin et al., 2018)](https://doi.org/10.1021/acs.jcim.7b00166). ### Citation Please use the following citation in any publication using our *B3DB* dataset: ```md @article{Meng_A_curated_diverse_2021, author = {Meng, Fanwang and Xi, Yang and Huang, Jinfeng and Ayers, Paul W.}, doi = {10.1038/s41597-021-01069-5}, journal = {Scientific Data}, number = {289}, title = {A curated diverse molecular database of blood-brain barrier permeability with chemical descriptors}, volume = {8}, year = {2021}, url = {https://www.nature.com/articles/s41597-021-01069-5}, publisher = {Springer Nature} } ```
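The cluster-aware idea behind such a split can be sketched as follows (a minimal illustration that assigns whole `ClusterNo` groups to one side of the split; it is not the exact procedure of Martin et al., 2018):

```python
import random

def cluster_split(rows, test_frac=0.25, seed=0):
    """Keep each cluster of similar molecules ('ClusterNo') entirely in
    train or entirely in test, so near-duplicates cannot leak across."""
    clusters = sorted({row["ClusterNo"] for row in rows})
    rng = random.Random(seed)
    rng.shuffle(clusters)
    cut = int(len(clusters) * (1 - test_frac))
    train_clusters = set(clusters[:cut])
    train = [row for row in rows if row["ClusterNo"] in train_clusters]
    test = [row for row in rows if row["ClusterNo"] not in train_clusters]
    return train, test
```

Splitting over clusters rather than individual molecules gives a more realistic estimate of generalization to structurally novel compounds.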
maomlab/B3DB
[ "task_categories:tabular-classification", "task_categories:tabular-regression", "size_categories:1K<n<10K", "source_datasets:curated", "language:en", "license:cc0-1.0", "chemistry", "biology", "medical", "region:us" ]
2024-01-24T16:46:24+00:00
{"language": "en", "license": "cc0-1.0", "size_categories": ["1K<n<10K"], "source_datasets": "curated", "task_categories": ["tabular-classification", "tabular-regression"], "pretty_name": "Blood-Brain Barrier Database (B3DB)", "config_names": ["B3DB_classification", "B3DB_classification_extended", "B3DB_regression", "B3DB_regression_extended"], "tags": ["chemistry", "biology", "medical"], "dataset_summary": "Curation of 50 published resources of categorical and numeric measurements of Blood-Brain Barrier penetration.", "citation": "@article{\n Meng_A_curated_diverse_2021,\n author = {Meng, Fanwang and Xi, Yang and Huang, Jinfeng and Ayers, Paul W.},\n doi = {10.1038/s41597-021-01069-5},\n journal = {Scientific Data},\n number = {289},\n title = {A curated diverse molecular database of blood-brain barrier permeability with chemical descriptors},\n volume = {8},\n year = {2021},\n url = {https://www.nature.com/articles/s41597-021-01069-5},\n publisher = {Springer Nature}\n}", "configs": [{"config_name": "B3DB_classification", "data_files": [{"split": "test", "path": "B3DB_classification/test.csv"}, {"split": "train", "path": "B3DB_classification/train.csv"}]}, {"config_name": "B3DB_classification_extended", "data_files": [{"split": "test", "path": "B3DB_classification_extended/test.csv"}, {"split": "train", "path": "B3DB_classification_extended/train.csv"}]}, {"config_name": "B3DB_regression", "data_files": [{"split": "test", "path": "B3DB_regression/test.csv"}, {"split": "train", "path": "B3DB_regression/train.csv"}]}, {"config_name": "B3DB_regression_extended", "data_files": [{"split": "test", "path": "B3DB_regression_extended/test.csv"}, {"split": "train", "path": "B3DB_regression_extended/train.csv"}]}], "dataset_info": [{"config_name": "B3DB_classification", "features": [{"name": "NO.", "dtype": "int64"}, {"name": "compound_name", "dtype": "string"}, {"name": "IUPAC_name", "dtype": "string"}, {"name": "SMILES", "dtype": "string"}, {"name": "CID", "dtype": 
"float64"}, {"name": "logBB", "dtype": "float64"}, {"name": "BBB+/BBB-", "dtype": {"class_label": {"names": {"0": "BBB-", "1": "BBB+"}}}}, {"name": "Inchi", "dtype": "string"}, {"name": "threshold", "dtype": "float64"}, {"name": "reference", "dtype": "string"}, {"name": "group", "dtype": "string"}, {"name": "comments", "dtype": "string"}, {"name": "ClusterNo", "dtype": "int64"}, {"name": "MolCount", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 656000, "num_examples": 5856}, {"name": "test", "num_bytes": 218640, "num_examples": 1951}]}, {"config_name": "B3DB_classification_extended", "splits": [{"name": "train", "num_bytes": 76221824, "num_examples": 5856}, {"name": "test", "num_bytes": 25394344, "num_examples": 1951}]}, {"config_name": "B3DB_regression", "features": [{"name": "NO.", "dtype": "int64"}, {"name": "compound_name", "dtype": "string"}, {"name": "IUPAC_name", "dtype": "string"}, {"name": "SMILES", "dtype": "string"}, {"name": "CID", "dtype": "string"}, {"name": "logBB", "dtype": "float64"}, {"name": "Inchi", "dtype": "string"}, {"name": "reference", "dtype": "string"}, {"name": "smiles_result", "dtype": "string"}, {"name": "group", "dtype": "string"}, {"name": "comments", "dtype": "float64"}, {"name": "ClusterNo", "dtype": "int64"}, {"name": "MolCount", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 82808, "num_examples": 795}, {"name": "test", "num_bytes": 27480, "num_examples": 263}]}, {"config_name": "B3DB_regression_extended", "splits": [{"name": "train", "num_bytes": 10347848, "num_examples": 795}, {"name": "test", "num_bytes": 3423336, "num_examples": 263}]}]}
2024-02-13T14:08:22+00:00
[]
[ "en" ]
TAGS #task_categories-tabular-classification #task_categories-tabular-regression #size_categories-1K<n<10K #source_datasets-curated #language-English #license-cc0-1.0 #chemistry #biology #medical #region-us
# Blood-Brain Barrier Database (B3DB) The Blood-Brain Barrier Database (B3DB) is a large benchmark dataset compiled from 50 published resources (as summarized at raw_data/raw_data_summary.tsv) and categorized based on the consistency between different experimental references/measurements. This dataset was published in Scientific Data and a mirror of the theochem/B3DB the official Github repo where it is occasionally uploaded with new experimental data. Scientists who would like to contribute data should contact the database's maintainers (e.g., by creating a new Issue in the database). A subset of the molecules in B3DB has numerical 'logBB' values (1058 compounds), while the whole dataset has categorical ('BBB+' or 'BBB-') BBB permeability labels (7807 compounds). Some physicochemical properties of the molecules are also provided. ## Quickstart Usage ### Load a dataset in python Each subset can be loaded into python using the Huggingface datasets library. First, from the command line install the 'datasets' library $ pip install datasets then, from within python load the datasets library >>> import datasets and load one of the 'B3DB' datasets, e.g., >>> B3DB_classification = datasets.load_dataset("maomlab/B3DB", name = "B3DB_classification") Downloading readme: 100%|████████████████████████| 4.40k/4.40k [00:00<00:00, 1.35MB/s] Downloading data: 100%|██████████████████████████| 680k/680k [00:00<00:00, 946kB/s] Downloading data: 100%|██████████████████████████| 2.11M/2.11M [00:01<00:00, 1.28MB/s] Generating test split: 100%|█████████████████████| 1951/1951 [00:00<00:00, 20854.95 examples/s] Generating train split: 100%|████████████████████| 5856/5856 [00:00<00:00, 144260.80 examples/s] and inspecting the loaded dataset >>> B3DB_classification B3DB_classification DatasetDict({ test: Dataset({ features: ['NO.', 'compound_name', 'IUPAC_name', 'SMILES', 'CID', 'logBB', 'BBB+/BBB-', 'Inchi', 'threshold', 'reference', 'group', 'comments', 'ClusterNo', 'MolCount'], num_rows: 
1951 }) train: Dataset({ features: ['NO.', 'compound_name', 'IUPAC_name', 'SMILES', 'CID', 'logBB', 'BBB+/BBB-', 'Inchi', 'threshold', 'reference', 'group', 'comments', 'ClusterNo', 'MolCount'], num_rows: 5856 }) }) ### Use a dataset to train a model One way to use the dataset is through the MolFlux package developed by Exscientia. First, from the command line, install the 'MolFlux' library with 'catboost' and 'rdkit' support pip install 'molflux[catboost,rdkit]' then load, featurize, fit, and evaluate a CatBoost model import json from datasets import load_dataset from molflux.datasets import featurise_dataset from molflux.features import load_from_dicts as load_representations_from_dicts from molflux.splits import load_from_dict as load_split_from_dict from molflux.modelzoo import load_from_dict as load_model_from_dict from molflux.metrics import load_suite split_dataset = load_dataset('maomlab/B3DB', name = 'B3DB_classification') split_featurised_dataset = featurise_dataset( split_dataset, column = "SMILES", representations = load_representations_from_dicts([{"name": "morgan"}, {"name": "maccs_rdkit"}])) model = load_model_from_dict({ "name": "cat_boost_classifier", "config": { "x_features": ['SMILES::morgan', 'SMILES::maccs_rdkit'], "y_features": ['BBB+/BBB-']}}) model.train(split_featurised_dataset["train"]) preds = model.predict(split_featurised_dataset["test"]) classification_suite = load_suite("classification") scores = classification_suite.compute( references=split_featurised_dataset["test"]['BBB+/BBB-'], predictions=preds["cat_boost_classifier::BBB+/BBB-"]) ## About the B3DB ### Features of *B3DB* 1. The largest dataset with numerical and categorical values for Blood-Brain Barrier small molecules (to the best of our knowledge, as of February 25, 2021). 2. Inclusion of stereochemistry information with isomeric SMILES with chiral specifications if available. Otherwise, canonical SMILES are used. 3. 
Characterization of uncertainty of experimental measurements by grouping the collected molecular data records. 4. Extended datasets for numerical and categorical data with precomputed physicochemical properties using mordred. ### Data splits The original B3DB dataset does not define splits, so here we have used the 'Realistic Split' method described in (Martin et al., 2018). Please use the following citation in any publication using our *B3DB* dataset:
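The numerical logBB values and the categorical BBB+/BBB- labels are linked by a permeability cutoff (each record also carries a 'threshold' feature). A minimal sketch of such a mapping; the 0.3 log-unit cutoff and the `label_from_logBB` helper are illustrative assumptions, not values documented by B3DB:

```python
def label_from_logBB(logBB, threshold=0.3):
    """Map a numerical logBB value to a categorical BBB permeability label.

    The 0.3 cutoff is an illustrative assumption; B3DB records the
    threshold actually applied to each entry in its 'threshold' column.
    """
    return "BBB+" if logBB >= threshold else "BBB-"

# A compound with high brain/blood ratio is labeled permeable
print(label_from_logBB(0.5))   # BBB+
print(label_from_logBB(-1.2))  # BBB-
```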
TAGS #task_categories-tabular-classification #task_categories-tabular-regression #size_categories-1K<n<10K #source_datasets-curated #language-English #license-cc0-1.0 #chemistry #biology #medical #region-us
7af7a1cdc7eff163db85354a5975714d2114f13a
# lilac/UltraChat-200k

This dataset is a [Lilac](http://lilacml.com) processed dataset. Original dataset: [https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k)

To download the dataset to a local directory:

```bash
lilac download lilacai/lilac-UltraChat-200k
```

or from python with:

```py
import lilac as ll

ll.download("lilacai/lilac-UltraChat-200k")
```
lilacai/lilac-UltraChat-200k
[ "Lilac", "region:us" ]
2024-01-24T16:47:02+00:00
{"tags": ["Lilac"]}
2024-02-13T22:14:11+00:00
[]
[]
TAGS #Lilac #region-us
eaaa292b8ea5f087457af1800df1d32e9cc771df
Downloaded from the [Supplemental Information](https://www.nature.com/articles/s41746-024-01010-1#Sec10) of the article "Diagnostic reasoning prompts reveal the potential for large language model interpretability in medicine" [[link](https://www.nature.com/articles/s41746-024-01010-1)].

Savage, T., Nayak, A., Gallo, R. et al. Diagnostic reasoning prompts reveal the potential for large language model interpretability in medicine. npj Digit. Med. 7, 20 (2024). https://doi.org/10.1038/s41746-024-01010-1
katielink/nejm-medqa-diagnostic-reasoning-dataset
[ "license:cc-by-4.0", "medical", "region:us" ]
2024-01-24T17:04:48+00:00
{"license": "cc-by-4.0", "tags": ["medical"], "dataset_info": [{"config_name": "medqa_test", "features": [{"name": "index", "dtype": "float64"}, {"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "new_IR", "dtype": "string"}, {"name": "IR Grade", "dtype": "string"}, {"name": "new_AR", "dtype": "string"}, {"name": "AR Grade", "dtype": "string"}, {"name": "new_BR", "dtype": "string"}, {"name": "BR Grade", "dtype": "string"}, {"name": "new_DR", "dtype": "string"}, {"name": "DR Grade", "dtype": "string"}, {"name": "CoT", "dtype": "string"}, {"name": "CoT.1", "dtype": "string"}, {"name": "GPT-4->", "dtype": "string"}, {"name": "CoT.2", "dtype": "string"}, {"name": "Rationale", "dtype": "string"}, {"name": "CoT Grade", "dtype": "string"}, {"name": "DR ", "dtype": "string"}, {"name": "Rationale.1", "dtype": "string"}, {"name": "DR Grade.1", "dtype": "string"}, {"name": "IR", "dtype": "string"}, {"name": "Rationale.2", "dtype": "string"}, {"name": "IR Grade.1", "dtype": "string"}, {"name": "AR", "dtype": "string"}, {"name": "Rationale.3", "dtype": "string"}, {"name": "AR Grade.1", "dtype": "string"}, {"name": "BR", "dtype": "string"}, {"name": "Rationale.4", "dtype": "string"}, {"name": "BR Grade.1", "dtype": "string"}, {"name": "Unnamed: 29", "dtype": "float64"}, {"name": "Unnamed: 30", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2863244, "num_examples": 513}], "download_size": 1507249, "dataset_size": 2863244}, {"config_name": "medqa_train", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "diag_IR1", "dtype": "string"}, {"name": "grade (1 for y , 2 for n)", "dtype": "float64"}, {"name": "diag_AR1", "dtype": "string"}, {"name": "grade", "dtype": "int64"}, {"name": "diag_BR1", "dtype": "string"}, {"name": "grade.1", "dtype": "int64"}, {"name": "diag_DR1", "dtype": "string"}, {"name": "grade.2", "dtype": "int64"}, {"name": "diag_IR2", "dtype": "string"}, {"name": 
"grade.3", "dtype": "int64"}, {"name": "diag_AR2", "dtype": "string"}, {"name": "grade.4", "dtype": "int64"}, {"name": "diag_BR2", "dtype": "string"}, {"name": "grade.5", "dtype": "int64"}, {"name": "diag_DR2", "dtype": "string"}, {"name": "grade.6", "dtype": "int64"}, {"name": "diag_IR3", "dtype": "string"}, {"name": "grade.7", "dtype": "int64"}, {"name": "diag_AR3", "dtype": "string"}, {"name": "grade.8", "dtype": "int64"}, {"name": "diag_BR3", "dtype": "string"}, {"name": "grade.9", "dtype": "int64"}, {"name": "diag_DR3", "dtype": "string"}, {"name": "grade.10", "dtype": "int64"}, {"name": "diag_IR4", "dtype": "string"}, {"name": "grade.11", "dtype": "int64"}, {"name": "diag_AR4", "dtype": "string"}, {"name": "grade.12", "dtype": "int64"}, {"name": "diag_BR4", "dtype": "string"}, {"name": "grade.13", "dtype": "int64"}, {"name": "diag_DR4", "dtype": "string"}, {"name": "grade.14", "dtype": "int64"}, {"name": "diag_IR5", "dtype": "string"}, {"name": "grade.15", "dtype": "int64"}, {"name": "diag_AR5", "dtype": "string"}, {"name": "grade.16", "dtype": "int64"}, {"name": "diag_IR6", "dtype": "string"}, {"name": "grade.17", "dtype": "int64"}, {"name": "diag_AR6", "dtype": "string"}, {"name": "grade.18", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 140480, "num_examples": 96}], "download_size": 106580, "dataset_size": 140480}, {"config_name": "nejm_test", "features": [{"name": "DOI", "dtype": "string"}, {"name": "Title", "dtype": "string"}, {"name": "Lab Data", "dtype": "string"}, {"name": "Answer", "dtype": "string"}, {"name": "Initial Answer", "dtype": "string"}, {"name": "BLINDED_1", "dtype": "string"}, {"name": "BLINDED_2", "dtype": "string"}, {"name": "BLINDED_1.1", "dtype": "string"}, {"name": "CoT_response", "dtype": "string"}, {"name": "CoT answer", "dtype": "string"}, {"name": "CoT Grade", "dtype": "string"}, {"name": "DR_response", "dtype": "string"}, {"name": "DR answer", "dtype": "string"}, {"name": "DR grade", "dtype": "string"}], 
"splits": [{"name": "train", "num_bytes": 10643219, "num_examples": 301}], "download_size": 3610707, "dataset_size": 10643219}], "configs": [{"config_name": "medqa_test", "data_files": [{"split": "train", "path": "medqa_test/train-*"}]}, {"config_name": "medqa_train", "data_files": [{"split": "train", "path": "medqa_train/train-*"}]}, {"config_name": "nejm_test", "data_files": [{"split": "train", "path": "nejm_test/train-*"}]}]}
2024-01-25T01:25:40+00:00
[]
[]
TAGS #license-cc-by-4.0 #medical #region-us
[]
b25ffbda9463824c2af9cce7b50487ce5df0fcb9
# Omni Sonus (All Speech) Dataset for speech related tasks

Multilingual speech dataset for multiple tasks including:

1. Speech Recognition.
2. Speech Synthesis.
3. Speech Emotion Recognition.
4. Speech Classification.
5. Speaker Classification.
6. Keyword Spotting.
7. Implementing new ideas.

## Dataset Details

Dataset Composition: Encompasses a vast collection of audio recordings featuring both male and female speakers. Each speaker contributes to the dataset across a range of emotions, ensuring diversity and comprehensiveness. Professional speakers were chosen to provide a polished and clear representation of spoken text.

1. Languages and Accents: Primarily focused on German and English accents in Version 1.0. Future iterations are planned to include a multitude of languages, with a special emphasis on Asian accents (Pakistani, Indian, Chinese) and the inclusion of the Urdu language. The aim is to create a truly multilingual dataset that caters to a broader audience and enhances a model's adaptability.

2. Intention and Task Labeling: The dataset is labeled based on the intention of the speaker, providing valuable insights into customer emotions during various tasks. Intentions cover a spectrum of scenarios, including but not limited to customer service queries, informational requests, and emotional expressions.

3. Demographic Information: Includes demographic details such as age and gender for each speaker, aiming to capture a diverse representation of age groups and gender identities and contributing to a well-rounded and inclusive dataset.

4. Text Variation: Each text in the dataset is spoken multiple times, ensuring robustness and variability in the training data. This approach helps the model learn to recognize emotions and intentions across different instances of the same text.

5. Duration Range: Spans a range of durations for each audio clip, mimicking real-world scenarios where interactions can vary in length. This ensures that the model is adept at handling both short and extended conversational snippets.

6. Upcoming Enhancements: Future versions are planned to feature an expanded range of accents, including but not limited to Urdu and additional Asian accents, with continuous updates to enrich the dataset and maintain its relevance in the ever-evolving landscape of language and communication.

This dataset serves as a robust resource for training models to understand and respond to human emotions, intentions, and accents, making it a valuable asset for applications ranging from customer service to emotional AI interfaces.

### Dataset Description

While the primary objective of this dataset lies in customer intention recognition, its versatility extends beyond the realm of customer service applications. This multilingual speech dataset holds immense potential for a diverse array of tasks, making it a valuable resource for various applications in the field of natural language processing.

The dataset can be effectively utilized for speech recognition, where a model can learn to transcribe spoken words accurately. It is also well-suited for speech synthesis, enabling the generation of natural-sounding and emotionally expressive synthetic speech. Speech emotion recognition benefits from the dataset's rich labeling of emotional states, contributing to the development of models that can discern and respond to human emotions effectively.

Furthermore, the dataset supports speech classification and speaker classification tasks, offering a foundation for training models to identify distinct speakers or classify spoken content. It also facilitates keyword spotting, aiding in the identification of specific terms or phrases within spoken language. Lastly, the dataset provides a robust platform for implementing new ideas, encouraging innovation and exploration within the domain of multilingual speech processing. Its adaptability across multiple tasks makes it a valuable asset for researchers and developers seeking a comprehensive and diverse speech dataset.

### Dataset Sources

For now, this dataset is available on Hugging Face only, but we aim to introduce the following sources soon:

- **Repository:** coming soon...
- **Paper:** coming soon...
- **Demo:** coming soon...

## Uses

Below is a simplified code snippet using the `datasets` library in Python to load the Omni Sonus dataset. For the sake of illustration, we assume that the dataset is available in the Hugging Face datasets hub.

```python
from datasets import load_dataset

dataset = load_dataset("Hunzla/omnisonus")
```

You can use all the methods provided by the `datasets` library; please refer to the documentation at https://huggingface.co/docs/datasets/index. And don't forget to update the `datasets` library in case of errors.

## Dataset Structure

The dataset primarily consists of the following columns:

1. file_name => A unique identifier for each audio, 14 characters long, with each part carrying a specific meaning:
   - (i) The first two digits represent the age of the speaker.
   - (ii) The third character represents the gender of the speaker: m for male and f for female.
   - (iii) The next three characters (index 4 to 6) represent an emotion: "ang" => angry, "bor" => bored, "dis" => disgusting, "anx" => anxiety/fear, "hap" => happy, "sad" => sadness, "neu" => neutral/normal.
   - (iv) The next two characters (index 7 and 8) together represent the spoken language. Language code characters are listed at https://en.wikipedia.org/wiki/List_of_ISO_639_language_codes.
   - (v) Finally, the last six characters (index 9 to 14) represent the duration and its unit of measurement, usually ms (milliseconds).

   Example: "35fboren1960ms" represents a 35-year-old female speaker who is bored and speaking English; the duration of the example audio is 1960 milliseconds.

2. audio => The audio file. By default, on load_dataset("Hunzla/omnisonus"), the resulting dataset will contain an audio column holding an audio array and a sampling rate with default value 16000.

3. text => The transcription of the audio file, i.e., what the speaker says in the audio.

4. intention => Hypothetical column for a basic classification task to classify whether the customer is interested or not, treating the audio as a customer response.

5. accent => The accent of the speaker.

## Terms and Conditions

This dataset is provided with the explicit understanding that it is intended solely for lawful and ethical purposes. Any use of this dataset for illegal, malicious, or unethical activities is strictly prohibited. By accessing or utilizing Omni-Sonus, you agree to adhere to the following guidelines:

1. Legal Compliance: Omni-Sonus must not be used for any activities that violate local, national, or international laws. Users are expected to comply with all applicable regulations and statutes.

2. Ethical Use: The dataset should be employed in a manner consistent with ethical standards and principles. Avoid any application that could cause harm, discomfort, or infringement upon the rights and privacy of individuals.

3. Non-Discrimination: Ensure that the dataset is used without any form of discrimination, bias, or harm towards any individual or group based on factors such as race, gender, ethnicity, religion, or any other protected characteristics.

4. Privacy Protection: Do not use Omni-Sonus in a way that compromises the privacy and confidentiality of individuals. Be cautious and responsible in handling any personally identifiable information that may be present in the dataset.

5. Intellectual Property Rights: Respect and adhere to all intellectual property rights associated with the dataset. Unauthorized distribution, reproduction, or modification of the dataset is strictly prohibited.

6. Research and Educational Purposes: While Omni-Sonus can be used for research and educational purposes, such activities should align with ethical standards and contribute positively to the advancement of knowledge.

7. No Unlawful Activities: The dataset must not be utilized for any form of cybercrime, hacking, or other unlawful activities. Any attempt to compromise the integrity of systems or networks using Omni-Sonus is strictly forbidden.

Violation of these terms may result in legal consequences and the termination of access to the dataset. Users are urged to exercise responsible and ethical behavior when using Omni-Sonus and contribute positively to the development of technology and knowledge.

## Dataset Card Authors

- **Curated by:** Hunzla Usman & Syed Aun Zaidi.
- **Funded by:** Abacus Consulting (pvt) ltd.
- **Language(s) (NLP):** English (a multilingual speech dataset, including Urdu, will be released soon).

## Dataset Card Contact

Email:
Syed Aun Zaidi => [email protected]
Hunzla Usman => [email protected]
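The file_name encoding described in the Dataset Structure section above can be decoded programmatically. A minimal sketch; the `parse_file_name` helper and its output field names are illustrative, not part of the dataset itself:

```python
# Three-letter emotion codes as documented in the dataset card
EMOTIONS = {
    "ang": "angry", "bor": "bored", "dis": "disgusting",
    "anx": "anxiety/fear", "hap": "happy", "sad": "sadness",
    "neu": "neutral/normal",
}

def parse_file_name(name):
    """Decode an Omni Sonus file_name such as '35fboren1960ms'."""
    return {
        "age": int(name[0:2]),                             # first two digits: speaker age
        "gender": "female" if name[2] == "f" else "male",  # third character: 'm' or 'f'
        "emotion": EMOTIONS[name[3:6]],                    # characters 4-6: emotion code
        "language": name[6:8],                             # characters 7-8: ISO 639 code
        "duration": name[8:],                              # remaining characters, e.g. '1960ms'
    }

info = parse_file_name("35fboren1960ms")
# → {'age': 35, 'gender': 'female', 'emotion': 'bored', 'language': 'en', 'duration': '1960ms'}
```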
Hunzla/omnisonus
[ "region:us" ]
2024-01-24T17:09:12+00:00
{"dataset_info": {"features": [{"name": "file_name", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "text", "dtype": "string"}, {"name": "intention", "dtype": "string"}, {"name": "accent", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6437781064.534813, "num_examples": 36468}, {"name": "test", "num_bytes": 804810899.2325933, "num_examples": 4559}, {"name": "validation", "num_bytes": 804810899.2325933, "num_examples": 4559}], "download_size": 8029293409, "dataset_size": 8047402863.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "validation", "path": "data/validation-*"}]}]}
2024-01-25T09:16:32+00:00
[]
[]
TAGS #region-us
# Omni Sonus(All Speech) Dataset for speech related tasks Multilingual speech dataset for multiple tasks including: 1. Speech Recognition. 2. Speech Synthesis. 3. Speech Emotion Recognition. 4. Speech Classification. 5. Speaker Classification. 6. Keyword Spotting. 7. Implementing new ideas. ## Dataset Details Dataset Composition: Encompasses a vast collection of audio recordings featuring both male and female speakers. Each speaker contributes to the dataset across a range of emotions, ensuring diversity and comprehensiveness. Professional speakers were chosen to provide a polished and clear representation of spoken text. 1. Languages and Accents: Primarily focused on German and English accents in Version 1.0. Future iterations planned to include a multitude of languages, with a special emphasis on Asian accents (Pakistani, Indian, Chinese) and the inclusion of Urdu language. Aim to create a truly multilingual dataset to cater to a broader audience and enhance the model's adaptability. 2. Intention and Task Labeling: The dataset is labeled based on the intention of the speaker, providing valuable insights into customer emotions during various tasks. Intentions cover a spectrum of scenarios, including but not limited to customer service queries, informational requests, and emotional expressions. 3. Demographic Information: Includes demographic details such as age and gender for each speaker. Aims to capture a diverse representation of age groups and gender identities, contributing to a well-rounded and inclusive dataset. 4. Text Variation: Each text in the dataset is spoken multiple times, ensuring robustness and variability in the training data. This approach helps the model learn to recognize emotions and intentions across different instances of the same text. 5. Duration Range: Spans a range of durations for each audio clip, mimicking real-world scenarios where interactions can vary in length. 
Ensures that the model is adept at handling both short and extended conversational snippets. 6. Upcoming Enhancements: Future versions are planned to feature an expanded range of accents, including but not limited to Urdu, and additional Asian accents. Continuous updates to enrich the dataset and maintain its relevance in the ever-evolving landscape of language and communication. This dataset serves as a robust resource for training models to understand and respond to human emotions, intentions, and accents, making it a valuable asset for applications ranging from customer service to emotional AI interfaces. ### Dataset Description While the primary objective of this dataset lies in customer intention recognition, its versatility extends beyond the realm of customer service applications. This multilingual speech dataset holds immense potential for a diverse array of tasks, making it a valuable resource for various applications in the field of natural language processing. The dataset can be effectively utilized for tasks such as speech recognition, where the model can learn to transcribe spoken words accurately. Additionally, it is well-suited for speech synthesis, enabling the generation of natural-sounding and emotionally expressive synthetic speech. Speech emotion recognition benefits from the dataset's rich labeling of emotional states, contributing to the development of models that can discern and respond to human emotions effectively. Furthermore, the dataset supports speech classification and speaker classification tasks, offering a foundation for training models to identify distinct speakers or classify spoken content. It also facilitates keyword spotting, aiding in the identification of specific terms or phrases within spoken language. Lastly, the dataset provides a robust platform for implementing new ideas, encouraging innovation and exploration within the domain of multilingual speech processing. 
Its adaptability across multiple tasks makes it a valuable asset for researchers and developers seeking a comprehensive and diverse speech dataset. ### Dataset Sources [optional] For now, this dataset is available on huggingface only but we aim to introduce the following sources soon: - Repository: coming soon... - Paper [optional]: coming soon... - Demo [optional]: coming soon... ## Uses Below are simplified code snippets using the datasets library in Python to load and use the described omni-sonus dataset. For the sake of illustration, we assume that the dataset is available in the Hugging Face datasets hub. ## from datasets import load_dataset ## dataset = load_dataset("Hunzla/omnisonus") You can use all the methods provided by datasets library.Please refer to the following documentation: ## URL And don't forget to update datasets library in case of errors. ## Dataset Structure Dataset primarily consistys of the following columns: 1. file_name => This is a unique identifier of each audio with the 14 characters each with a specific meaning. (i). First two digits represent an age of a speaker. (ii). Third character represents gender of a speaker.m for male and f for female. (iii). Next three characters from index 4 to 6 represent an emotion with following details: "ang" => angry, "bor" => bored, "dis" => disgusting, "anx" => anxiety/fear, "hap" => happy, "sad" => sadness, "neu" => neutral/normal (iv). Next 2 characters with index 7 and 8 togeather represent speaking language. You can see language code character at URL (v). Finally last 6 characters from index 9 to 14 represent duration and unit of time measurement usually ms(milliseconds). Example: "35fboren1960ms" <= Here this file_name is representing a 35 years old female speaker that is bored and speaking english language. Additionally, the duration of of example audio is 1960 milliseconds. 2. 
audio => Representing an audio file. By default, on load_dataset("Hunzla/speech-commands-wav2vec2-960h") the resulting datasets will contain an audio column containing an audio array and a sampling rate with default value 16000. 3. text => This is the transcription of what the speaker says in the audio file. 4. intention => Hypothetical column for a basic classification task to classify whether the customer is interested or not, assuming the audio is a response by the customer. 5. accent => This represents the accent of the speaker. ## Terms and Conditions This dataset is provided with the explicit understanding that it is intended solely for lawful and ethical purposes. Any use of this dataset for illegal, malicious, or unethical activities is strictly prohibited. By accessing or utilizing Omni-Sonus, you agree to adhere to the following guidelines: 1. Legal Compliance: Omni-Sonus must not be used for any activities that violate local, national, or international laws. Users are expected to comply with all applicable regulations and statutes. 2. Ethical Use: The dataset should be employed in a manner consistent with ethical standards and principles. Avoid any application that could cause harm, discomfort, or infringement upon the rights and privacy of individuals. 3. Non-Discrimination: Ensure that the dataset is used without any form of discrimination, bias, or harm towards any individual or group based on factors such as race, gender, ethnicity, religion, or any other protected characteristics. 4. Privacy Protection: Do not use Omni-Sonus in a way that compromises the privacy and confidentiality of individuals. Be cautious and responsible in handling any personally identifiable information that may be present in the dataset. 5. Intellectual Property Rights: Respect and adhere to all intellectual property rights associated with the dataset. Unauthorized distribution, reproduction, or modification of the dataset is strictly prohibited. 6.
Research and Educational Purposes: While Omni-Sonus can be used for research and educational purposes, such activities should align with ethical standards and contribute positively to the advancement of knowledge. 7. No Unlawful Activities: The dataset must not be utilized for any form of cybercrime, hacking, or other unlawful activities. Any attempt to compromise the integrity of systems or networks using Omni-Sonus is strictly forbidden. Violation of these terms may result in legal consequences and the termination of access to the dataset. Users are urged to exercise responsible and ethical behavior when using Omni-Sonus and contribute positively to the development of technology and knowledge. ## Dataset Card Authors [optional] - Curated by: Hunzla Usman & Syed Aun Zaidi. - Funded by [optional]: Abacus Consulting (pvt) ltd. - Language(s) (NLP): English (Multilingual speech(including Urdu) dataset will be released soon.) ## Dataset Card Contact Email: Syed Aun Zaidi => saunzaidi@URL Hunzla Usman => hunzlausman0000@URL
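As a worked illustration of the 14-character file_name scheme described under Dataset Structure, a small helper can decode a name into its fields. This is only a sketch against the documented encoding (the emotion map and the ms/s unit handling follow the description above); it is not part of the dataset's own tooling:

```python
import re

# Emotion codes as documented in the Dataset Structure section.
EMOTIONS = {"ang": "angry", "bor": "bored", "dis": "disgusting",
            "anx": "anxiety/fear", "hap": "happy", "sad": "sadness",
            "neu": "neutral/normal"}

def parse_file_name(name: str) -> dict:
    """Decode a file_name such as "35fboren1960ms" into its fields."""
    m = re.fullmatch(r"(\d{2})([mf])([a-z]{3})([a-z]{2})(\d+)(ms|s)", name)
    if m is None:
        raise ValueError(f"unrecognised file_name: {name!r}")
    age, gender, emotion, lang, value, unit = m.groups()
    return {
        "age": int(age),
        "gender": "male" if gender == "m" else "female",
        "emotion": EMOTIONS.get(emotion, emotion),
        "language": lang,  # two-letter language code, e.g. "en"
        "duration_ms": int(value) * (1000 if unit == "s" else 1),
    }

# parse_file_name("35fboren1960ms")
# → {'age': 35, 'gender': 'female', 'emotion': 'bored',
#    'language': 'en', 'duration_ms': 1960}
```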
[ "# Omni Sonus(All Speech) Dataset for speech related tasks\n\nMultilingual speech dataset for multiple tasks including:\n 1. Speech Recognition.\n 2. Speech Synthesis.\n 3. Speech Emotion Recognition.\n 4. Speech Classification.\n 5. Speaker Classification.\n 6. Keyword Spotting.\n 7. Implementing new ideas.", "## Dataset Details\n Dataset Composition:\n \n Encompasses a vast collection of audio recordings featuring both male and female speakers.\n Each speaker contributes to the dataset across a range of emotions, ensuring diversity and comprehensiveness.\n Professional speakers were chosen to provide a polished and clear representation of spoken text.\n 1. Languages and Accents:\n Primarily focused on German and English accents in Version 1.0.\n Future iterations planned to include a multitude of languages, with a special emphasis on Asian accents (Pakistani, Indian, Chinese) and the inclusion of Urdu language.\n Aim to create a truly multilingual dataset to cater to a broader audience and enhance the model's adaptability.\n \n 2. Intention and Task Labeling:\n The dataset is labeled based on the intention of the speaker, providing valuable insights into customer emotions during various tasks.\n Intentions cover a spectrum of scenarios, including but not limited to customer service queries, informational requests, and emotional expressions.\n \n 3. Demographic Information:\n Includes demographic details such as age and gender for each speaker.\n Aims to capture a diverse representation of age groups and gender identities, contributing to a well-rounded and inclusive dataset.\n \n 4. Text Variation:\n Each text in the dataset is spoken multiple times, ensuring robustness and variability in the training data.\n This approach helps the model learn to recognize emotions and intentions across different instances of the same text.\n \n 5. 
Duration Range:\n Spans a range of durations for each audio clip, mimicking real-world scenarios where interactions can vary in length.\n Ensures that the model is adept at handling both short and extended conversational snippets.\n 6. Upcoming Enhancements:\n Future versions are planned to feature an expanded range of accents, including but not limited to Urdu, and additional Asian accents.\n Continuous updates to enrich the dataset and maintain its relevance in the ever-evolving landscape of language and communication.\n This dataset serves as a robust resource for training models to understand and respond to human emotions, intentions, and accents, making it a valuable asset for applications ranging from customer service to emotional AI interfaces.", "### Dataset Description\n\n While the primary objective of this dataset lies in customer intention recognition, its versatility extends beyond the realm of customer\n service applications.\n This multilingual speech dataset holds immense potential for a diverse array of tasks, making it a valuable resource for various \n applications in the field of natural language processing.\n The dataset can be effectively utilized for tasks such as speech recognition, where the model can learn to transcribe spoken words\n accurately.\n Additionally, it is well-suited for speech synthesis, enabling the generation of natural-sounding and emotionally expressive synthetic \n speech.\n Speech emotion recognition benefits from the dataset's rich labeling of emotional states, contributing to the development of models that \n can discern and respond to human emotions effectively.\n Furthermore, the dataset supports speech classification and speaker classification tasks, offering a foundation for training models to \n identify distinct speakers or classify spoken content.\n It also facilitates keyword spotting, aiding in the identification of specific terms or phrases within spoken language.\n Lastly, the dataset provides a robust 
platform for implementing new ideas, encouraging innovation and exploration within the domain of \n multilingual speech processing.\n Its adaptability across multiple tasks makes it a valuable asset for researchers and developers seeking a comprehensive and diverse speech \n dataset.", "### Dataset Sources [optional]\n\n For now, this dataset is available on huggingface only but we aim to introduce the following sources soon:\n - Repository: coming soon...\n \n - Paper [optional]: coming soon...\n \n - Demo [optional]: coming soon...", "## Uses\n Below are simplified code snippets using the datasets library in Python to load and use the described omni-sonus dataset.\n For the sake of illustration, we assume that the dataset is available in the Hugging Face datasets hub.\n ## from datasets import load_dataset\n ## dataset = load_dataset(\"Hunzla/omnisonus\")\n You can use all the methods provided by datasets library.Please refer to the following documentation:\n ## URL\n And don't forget to update datasets library in case of errors.", "## Dataset Structure\n\n\nDataset primarily consistys of the following columns:\n 1. file_name => This is a unique identifier of each audio with the 14 characters each with a specific meaning.\n (i). First two digits represent an age of a speaker.\n (ii). Third character represents gender of a speaker.m for male and f for female.\n (iii). Next three characters from index 4 to 6 represent an emotion with following details:\n \"ang\" => angry,\n \"bor\" => bored,\n \"dis\" => disgusting,\n \"anx\" => anxiety/fear,\n \"hap\" => happy,\n \"sad\" => sadness,\n \"neu\" => neutral/normal\n (iv). Next 2 characters with index 7 and 8 togeather represent speaking language.\n You can see language code character at URL\n (v). 
Finally last 6 characters from index 9 to 14 represent duration and unit of time measurement usually ms(milliseconds).\n Example: \"35fboren1960ms\" <= Here this file_name is representing a 35 years old female speaker that is bored and speaking english language.\n Additionally, the duration of of example audio is 1960 milliseconds.\n 2. audio => Representing an audio file.By default, on load_dataset(\"Hunzla/speech-commands-wav2vec2-960h\") the resulting datasets will contain\n an audio column containing an audio array and sampling rate with default value 16000.\n 3. text => This is transcription of an audio file that is being said by a speaker in audio file.\n 4. intention => Hypothetical column for a basic classification task to classifiy either customer is interested or not, assuming an audio\n as a reponse by customer.\n 5. accent => This is reprecenting an accent of speaker.", "## Terms and Conditions\n\n This dataset is provided with the explicit understanding that it is intended solely for lawful and ethical purposes. Any use of this dataset for illegal, malicious, or unethical activities is strictly prohibited. By accessing or utilizing Omni-Sonus, you agree to adhere to the following guidelines:\n 1. Legal Compliance:\n Omni-Sonus must not be used for any activities that violate local, national, or international laws. Users are expected to comply with all applicable regulations and statutes.\n 2. Ethical Use:\n The dataset should be employed in a manner consistent with ethical standards and principles. Avoid any application that could cause harm, discomfort, or infringement upon the rights and privacy of individuals.\n 3. Non-Discrimination:\n Ensure that the dataset is used without any form of discrimination, bias, or harm towards any individual or group based on factors such as race, gender, ethnicity, religion, or any other protected characteristics.\n 4. 
Privacy Protection:\n Do not use Omni-Sonus in a way that compromises the privacy and confidentiality of individuals. Be cautious and responsible in handling any personally identifiable information that may be present in the dataset.\n 5. Intellectual Property Rights:\n Respect and adhere to all intellectual property rights associated with the dataset. Unauthorized distribution, reproduction, or modification of the dataset is strictly prohibited.\n 6. Research and Educational Purposes:\n While Omni-Sonus can be used for research and educational purposes, such activities should align with ethical standards and contribute positively to the advancement of knowledge.\n 7. No Unlawful Activities:\n The dataset must not be utilized for any form of cybercrime, hacking, or other unlawful activities. Any attempt to compromise the integrity of systems or networks using Omni-Sonus is strictly forbidden. \n\n Violation of these terms may result in legal consequences and the termination of access to the dataset. Users are urged to exercise responsible and ethical behavior when using Omni-Sonus and contribute positively to the development of technology and knowledge.", "## Dataset Card Authors [optional]\n\n - Curated by: Hunzla Usman & Syed Aun Zaidi.\n - Funded by [optional]: Abacus Consulting (pvt) ltd.\n - Language(s) (NLP): English (Multilingual speech(including Urdu) dataset will be released soon.)", "## Dataset Card Contact\n Email:\n Syed Aun Zaidi => saunzaidi@URL\n Hunzla Usman => hunzlausman0000@URL" ]
[ "TAGS\n#region-us \n", "# Omni Sonus(All Speech) Dataset for speech related tasks\n\nMultilingual speech dataset for multiple tasks including:\n 1. Speech Recognition.\n 2. Speech Synthesis.\n 3. Speech Emotion Recognition.\n 4. Speech Classification.\n 5. Speaker Classification.\n 6. Keyword Spotting.\n 7. Implementing new ideas.", "## Dataset Details\n Dataset Composition:\n \n Encompasses a vast collection of audio recordings featuring both male and female speakers.\n Each speaker contributes to the dataset across a range of emotions, ensuring diversity and comprehensiveness.\n Professional speakers were chosen to provide a polished and clear representation of spoken text.\n 1. Languages and Accents:\n Primarily focused on German and English accents in Version 1.0.\n Future iterations planned to include a multitude of languages, with a special emphasis on Asian accents (Pakistani, Indian, Chinese) and the inclusion of Urdu language.\n Aim to create a truly multilingual dataset to cater to a broader audience and enhance the model's adaptability.\n \n 2. Intention and Task Labeling:\n The dataset is labeled based on the intention of the speaker, providing valuable insights into customer emotions during various tasks.\n Intentions cover a spectrum of scenarios, including but not limited to customer service queries, informational requests, and emotional expressions.\n \n 3. Demographic Information:\n Includes demographic details such as age and gender for each speaker.\n Aims to capture a diverse representation of age groups and gender identities, contributing to a well-rounded and inclusive dataset.\n \n 4. Text Variation:\n Each text in the dataset is spoken multiple times, ensuring robustness and variability in the training data.\n This approach helps the model learn to recognize emotions and intentions across different instances of the same text.\n \n 5. 
Duration Range:\n Spans a range of durations for each audio clip, mimicking real-world scenarios where interactions can vary in length.\n Ensures that the model is adept at handling both short and extended conversational snippets.\n 6. Upcoming Enhancements:\n Future versions are planned to feature an expanded range of accents, including but not limited to Urdu, and additional Asian accents.\n Continuous updates to enrich the dataset and maintain its relevance in the ever-evolving landscape of language and communication.\n This dataset serves as a robust resource for training models to understand and respond to human emotions, intentions, and accents, making it a valuable asset for applications ranging from customer service to emotional AI interfaces.", "### Dataset Description\n\n While the primary objective of this dataset lies in customer intention recognition, its versatility extends beyond the realm of customer\n service applications.\n This multilingual speech dataset holds immense potential for a diverse array of tasks, making it a valuable resource for various \n applications in the field of natural language processing.\n The dataset can be effectively utilized for tasks such as speech recognition, where the model can learn to transcribe spoken words\n accurately.\n Additionally, it is well-suited for speech synthesis, enabling the generation of natural-sounding and emotionally expressive synthetic \n speech.\n Speech emotion recognition benefits from the dataset's rich labeling of emotional states, contributing to the development of models that \n can discern and respond to human emotions effectively.\n Furthermore, the dataset supports speech classification and speaker classification tasks, offering a foundation for training models to \n identify distinct speakers or classify spoken content.\n It also facilitates keyword spotting, aiding in the identification of specific terms or phrases within spoken language.\n Lastly, the dataset provides a robust 
platform for implementing new ideas, encouraging innovation and exploration within the domain of \n multilingual speech processing.\n Its adaptability across multiple tasks makes it a valuable asset for researchers and developers seeking a comprehensive and diverse speech \n dataset.", "### Dataset Sources [optional]\n\n For now, this dataset is available on huggingface only but we aim to introduce the following sources soon:\n - Repository: coming soon...\n \n - Paper [optional]: coming soon...\n \n - Demo [optional]: coming soon...", "## Uses\n Below are simplified code snippets using the datasets library in Python to load and use the described omni-sonus dataset.\n For the sake of illustration, we assume that the dataset is available in the Hugging Face datasets hub.\n ## from datasets import load_dataset\n ## dataset = load_dataset(\"Hunzla/omnisonus\")\n You can use all the methods provided by datasets library.Please refer to the following documentation:\n ## URL\n And don't forget to update datasets library in case of errors.", "## Dataset Structure\n\n\nDataset primarily consistys of the following columns:\n 1. file_name => This is a unique identifier of each audio with the 14 characters each with a specific meaning.\n (i). First two digits represent an age of a speaker.\n (ii). Third character represents gender of a speaker.m for male and f for female.\n (iii). Next three characters from index 4 to 6 represent an emotion with following details:\n \"ang\" => angry,\n \"bor\" => bored,\n \"dis\" => disgusting,\n \"anx\" => anxiety/fear,\n \"hap\" => happy,\n \"sad\" => sadness,\n \"neu\" => neutral/normal\n (iv). Next 2 characters with index 7 and 8 togeather represent speaking language.\n You can see language code character at URL\n (v). 
Finally last 6 characters from index 9 to 14 represent duration and unit of time measurement usually ms(milliseconds).\n Example: \"35fboren1960ms\" <= Here this file_name is representing a 35 years old female speaker that is bored and speaking english language.\n Additionally, the duration of of example audio is 1960 milliseconds.\n 2. audio => Representing an audio file.By default, on load_dataset(\"Hunzla/speech-commands-wav2vec2-960h\") the resulting datasets will contain\n an audio column containing an audio array and sampling rate with default value 16000.\n 3. text => This is transcription of an audio file that is being said by a speaker in audio file.\n 4. intention => Hypothetical column for a basic classification task to classifiy either customer is interested or not, assuming an audio\n as a reponse by customer.\n 5. accent => This is reprecenting an accent of speaker.", "## Terms and Conditions\n\n This dataset is provided with the explicit understanding that it is intended solely for lawful and ethical purposes. Any use of this dataset for illegal, malicious, or unethical activities is strictly prohibited. By accessing or utilizing Omni-Sonus, you agree to adhere to the following guidelines:\n 1. Legal Compliance:\n Omni-Sonus must not be used for any activities that violate local, national, or international laws. Users are expected to comply with all applicable regulations and statutes.\n 2. Ethical Use:\n The dataset should be employed in a manner consistent with ethical standards and principles. Avoid any application that could cause harm, discomfort, or infringement upon the rights and privacy of individuals.\n 3. Non-Discrimination:\n Ensure that the dataset is used without any form of discrimination, bias, or harm towards any individual or group based on factors such as race, gender, ethnicity, religion, or any other protected characteristics.\n 4. 
Privacy Protection:\n Do not use Omni-Sonus in a way that compromises the privacy and confidentiality of individuals. Be cautious and responsible in handling any personally identifiable information that may be present in the dataset.\n 5. Intellectual Property Rights:\n Respect and adhere to all intellectual property rights associated with the dataset. Unauthorized distribution, reproduction, or modification of the dataset is strictly prohibited.\n 6. Research and Educational Purposes:\n While Omni-Sonus can be used for research and educational purposes, such activities should align with ethical standards and contribute positively to the advancement of knowledge.\n 7. No Unlawful Activities:\n The dataset must not be utilized for any form of cybercrime, hacking, or other unlawful activities. Any attempt to compromise the integrity of systems or networks using Omni-Sonus is strictly forbidden. \n\n Violation of these terms may result in legal consequences and the termination of access to the dataset. Users are urged to exercise responsible and ethical behavior when using Omni-Sonus and contribute positively to the development of technology and knowledge.", "## Dataset Card Authors [optional]\n\n - Curated by: Hunzla Usman & Syed Aun Zaidi.\n - Funded by [optional]: Abacus Consulting (pvt) ltd.\n - Language(s) (NLP): English (Multilingual speech(including Urdu) dataset will be released soon.)", "## Dataset Card Contact\n Email:\n Syed Aun Zaidi => saunzaidi@URL\n Hunzla Usman => hunzlausman0000@URL" ]
a15565d5656b9af5abeb3805bd54dd0b28cd9f1f
# MidJoyrney V5-6 MidJoyrney V5-6 is a large-scale dataset scraped from the official MidJoyrney Discord server. All text descriptions and images were generated by users of MidJoyrney. ## Dataset Structure ```bash $ tree ├── MidJoyrney V5-6 │ ├── readme.md │ ├── MidJoyrneyV5.csv │ ├── MidJoyrneyV6.csv ``` ## How to use ```python from datasets import load_dataset V6 = load_dataset("WiNE-iNEFF/MidJoyrneyV5-6", "V6") V5 = load_dataset("WiNE-iNEFF/MidJoyrneyV5-6", "V5") ``` ## Dataset Details - **Curated by:** [Artsem Holub (WiNE iNEFF)](https://huggingface.co/WiNE-iNEFF) - **Language(s) (NLP):** English - **License:** mit ### Dataset Sources - **Site:** [MidJoyrney](https://www.midjourney.com/) - **Discord:** [MidJoyrney](https://discord.com/invite/midjourney)
WiNE-iNEFF/MidJoyrneyV5-6
[ "task_categories:text-to-image", "task_categories:image-to-text", "size_categories:1K<n<10K", "language:en", "license:mit", "art", "text2image", "img2text", "image generation", "midjoyrney", "midjoyrneyV5", "midjoyrneyv6", "region:us" ]
2024-01-24T17:19:00+00:00
{"language": ["en"], "license": "mit", "size_categories": ["1K<n<10K"], "task_categories": ["text-to-image", "image-to-text"], "pretty_name": "MidJoyrneyV5-6", "tags": ["art", "text2image", "img2text", "image generation", "midjoyrney", "midjoyrneyV5", "midjoyrneyv6"], "dataset_info": {"features": [{"name": "channel_id", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "url", "dtype": "image"}, {"name": "content", "dtype": "string"}]}, "configs": [{"config_name": "V6", "data_files": "MidJoyrneyV6.csv", "default": true}, {"config_name": "V5", "data_files": "MidJoyrneyV5.csv"}]}
2024-01-26T16:14:43+00:00
[]
[ "en" ]
TAGS #task_categories-text-to-image #task_categories-image-to-text #size_categories-1K<n<10K #language-English #license-mit #art #text2image #img2text #image generation #midjoyrney #midjoyrneyV5 #midjoyrneyv6 #region-us
# MidJoyrney V5-6 MidJoyrney V5-6 is a large-scale dataset scrape from official Discord server MidJoyrney. All text description and images generated by users who use MidJoyrney. ## Dataset Structure ## How to use ## Dataset Details - Curated by: Artsem Holub (WiNE iNEFF) - Language(s) (NLP): English - License: mit ### Dataset Sources - Site MidJoyrney - Discord: MidJoyrney
[ "# MidJoyrney V5-6\n\nMidJoyrney V5-6 is a large-scale dataset scrape from official Discord server MidJoyrney. All text description and images generated by users who use MidJoyrney.", "## Dataset Structure", "## How to use", "## Dataset Details\n\n- Curated by: Artsem Holub (WiNE iNEFF)\n- Language(s) (NLP): English\n- License: mit", "### Dataset Sources\n\n- Site MidJoyrney\n- Discord: MidJoyrney" ]
[ "TAGS\n#task_categories-text-to-image #task_categories-image-to-text #size_categories-1K<n<10K #language-English #license-mit #art #text2image #img2text #image generation #midjoyrney #midjoyrneyV5 #midjoyrneyv6 #region-us \n", "# MidJoyrney V5-6\n\nMidJoyrney V5-6 is a large-scale dataset scrape from official Discord server MidJoyrney. All text description and images generated by users who use MidJoyrney.", "## Dataset Structure", "## How to use", "## Dataset Details\n\n- Curated by: Artsem Holub (WiNE iNEFF)\n- Language(s) (NLP): English\n- License: mit", "### Dataset Sources\n\n- Site MidJoyrney\n- Discord: MidJoyrney" ]
e0d83d3b39833d6e7f083bd0aa1ca3fcdd549f73
# Dataset Card for Dataset Name <!-- Provide a quick summary of the dataset. --> This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. 
--> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. 
--> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
kaosrrr/kaos-me
[ "region:us" ]
2024-01-24T17:23:41+00:00
{}
2024-01-24T17:24:51+00:00
[]
[]
TAGS #region-us
# Dataset Card for Dataset Name This dataset card aims to be a base template for new datasets. It has been generated using this raw template. ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
98c294b3caaa94170e77282f1587ea20c3d59784
# DiverseVul: A New Vulnerable Source Code Dataset for Deep Learning Based Vulnerability Detection This is an **unofficial** HuggingFace upload of the DiverseVul dataset from "[DiverseVul: A New Vulnerable Source Code Dataset for Deep Learning Based Vulnerability Detection](https://dl.acm.org/doi/10.1145/3607199.3607242)". Usage: ```python from datasets import load_dataset dataset = load_dataset("claudios/DiverseVul") ``` *** ## Citation ``` @article{Chen2023DiverseVulAN, title={DiverseVul: A New Vulnerable Source Code Dataset for Deep Learning Based Vulnerability Detection}, author={Yizheng Chen and Zhoujie Ding and Lamya Alowain and Xinyun Chen and David A. Wagner}, journal={Proceedings of the 26th International Symposium on Research in Attacks, Intrusions and Defenses}, year={2023}, url={https://dl.acm.org/doi/10.1145/3607199.3607242} } ```
claudios/DiverseVul
[ "region:us" ]
2024-01-24T17:41:37+00:00
{"arxiv": 2304.00409, "dataset_info": {"features": [{"name": "func", "dtype": "string"}, {"name": "target", "dtype": "int64"}, {"name": "cwe", "sequence": "string"}, {"name": "project", "dtype": "string"}, {"name": "commit_id", "dtype": "string"}, {"name": "hash", "dtype": "float64"}, {"name": "size", "dtype": "int64"}, {"name": "message", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 670935965, "num_examples": 330492}], "download_size": 170891799, "dataset_size": 670935965}, "configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}]}
2024-01-30T18:30:49+00:00
[]
[]
TAGS #region-us
# DiverseVul: A New Vulnerable Source Code Dataset for Deep Learning Based Vulnerability Detection This is an unofficial HuggingFace upload of the DiverseVul dataset from "DiverseVul: A New Vulnerable Source Code Dataset for Deep Learning Based Vulnerability Detection". Usage: *
[ "# DiverseVul: A New Vulnerable Source Code Dataset for Deep Learning Based Vulnerability Detection\nThis is an unofficial HuggingFace upload of the DiverseVul dataset from \"DiverseVul: A New Vulnerable Source Code Dataset for Deep Learning Based Vulnerability Detection\".\n\nUsage:\n\n*" ]
[ "TAGS\n#region-us \n", "# DiverseVul: A New Vulnerable Source Code Dataset for Deep Learning Based Vulnerability Detection\nThis is an unofficial HuggingFace upload of the DiverseVul dataset from \"DiverseVul: A New Vulnerable Source Code Dataset for Deep Learning Based Vulnerability Detection\".\n\nUsage:\n\n*" ]
1b264e60fa55da5ed130d7da2a845b77b862a81c
# NijiJoyrney V5

NijiJoyrney V5 is a large-scale dataset scraped from the official Discord server of NijiJoyrney. All text descriptions and images were generated by users of NijiJoyrney.

## Dataset Structure

```bash
$ tree
├── NijiJoyrneyV5
│   ├── readme.md
│   ├── NijiJoyrneyV5_English.csv
│   ├── NijiJoyrneyV5_Korean.csv
│   ├── NijiJoyrneyV5_Japanese.csv
│   ├── NijiJoyrneyV5_Chinese.csv
```

## How to use

```python
from datasets import load_dataset

Niji_Eng = load_dataset("WiNE-iNEFF/NijiJoyrneyV5", "English")
Niji_Kor = load_dataset("WiNE-iNEFF/NijiJoyrneyV5", "Korean")
Niji_Jap = load_dataset("WiNE-iNEFF/NijiJoyrneyV5", "Japanese")
Niji_Chi = load_dataset("WiNE-iNEFF/NijiJoyrneyV5", "Chinese")
```

## Dataset Details

- **Curated by:** [Artsem Holub (WiNE iNEFF)](https://huggingface.co/WiNE-iNEFF)
- **Language(s) (NLP):** English, Korean, Japanese, Chinese
- **License:** mit

### Dataset Sources

- **Site:** [NijiJoyrney](https://nijijourney.com/en/getting-started/)
- **Discord:** [NijiJoyrney](https://discord.com/invite/nijijourney)
WiNE-iNEFF/NijiJoyrneyV5
[ "task_categories:text-to-image", "task_categories:image-to-text", "size_categories:1K<n<10K", "language:en", "language:ko", "language:ja", "language:zh", "license:mit", "art", "text2image", "img2text", "image generation", "nijijoyrney", "nijijoyrneyV5", "region:us" ]
2024-01-24T17:51:44+00:00
{"language": ["en", "ko", "ja", "zh"], "license": "mit", "size_categories": ["1K<n<10K"], "task_categories": ["text-to-image", "image-to-text"], "pretty_name": "NijiJoyrneyV5", "tags": ["art", "text2image", "img2text", "image generation", "nijijoyrney", "nijijoyrneyV5"], "dataset_info": {"features": [{"name": "channel_id", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "url", "dtype": "image"}, {"name": "content", "dtype": "string"}]}, "configs": [{"config_name": "English", "data_files": "NijiJoyrneyV5_English.csv", "default": true}, {"config_name": "Korean", "data_files": "NijiJoyrneyV5_Korean.csv"}, {"config_name": "Japanese", "data_files": "NijiJoyrneyV5_Japenese.csv"}, {"config_name": "Chinese", "data_files": "NijiJoyrneyV5_Chinese.csv"}]}
2024-01-24T18:11:05+00:00
[]
[ "en", "ko", "ja", "zh" ]
TAGS #task_categories-text-to-image #task_categories-image-to-text #size_categories-1K<n<10K #language-English #language-Korean #language-Japanese #language-Chinese #license-mit #art #text2image #img2text #image generation #nijijoyrney #nijijoyrneyV5 #region-us
# NijiJoyrney V5 NijiJoyrney V5 is a large-scale dataset scraped from the official Discord server NijiJoyrney. All text descriptions and images were generated by users of NijiJoyrney. ## Dataset Structure ## How to use ## Dataset Details - Curated by: Artsem Holub (WiNE iNEFF) - Language(s) (NLP): English, Korean, Japanese, Chinese - License: mit ### Dataset Sources - Site NijiJoyrney - Discord: NijiJoyrney
[ "# NijiJoyrney V5\n\nNijiJoyrney V5 is a large-scale dataset scrape from official Discord server NijiJoyrney. All text description and images generated by users who use NijiJoyrney.", "## Dataset Structure", "## How to use", "## Dataset Details\n\n- Curated by: Artsem Holub (WiNE iNEFF)\n- Language(s) (NLP): English, Korean, Japanese, Chinese\n- License: mit", "### Dataset Sources\n\n- Site NijiJoyrney\n- Discord: NijiJoyrney" ]
[ "TAGS\n#task_categories-text-to-image #task_categories-image-to-text #size_categories-1K<n<10K #language-English #language-Korean #language-Japanese #language-Chinese #license-mit #art #text2image #img2text #image generation #nijijoyrney #nijijoyrneyV5 #region-us \n", "# NijiJoyrney V5\n\nNijiJoyrney V5 is a large-scale dataset scrape from official Discord server NijiJoyrney. All text description and images generated by users who use NijiJoyrney.", "## Dataset Structure", "## How to use", "## Dataset Details\n\n- Curated by: Artsem Holub (WiNE iNEFF)\n- Language(s) (NLP): English, Korean, Japanese, Chinese\n- License: mit", "### Dataset Sources\n\n- Site NijiJoyrney\n- Discord: NijiJoyrney" ]
7c9d31d6d02fc8f6ff92fec641e5a8bc5e0cda9d
# Dataset Card for "scrolls_gov_report_preprocessed" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
regisss/scrolls_gov_report_preprocessed_mlperf
[ "region:us" ]
2024-01-24T17:55:31+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 550940448, "num_examples": 5604}, {"name": "validation", "num_bytes": 23889816, "num_examples": 243}], "download_size": 111686734, "dataset_size": 574830264}}
2024-01-24T17:55:44+00:00
[]
[]
TAGS #region-us
# Dataset Card for "scrolls_gov_report_preprocessed" More Information needed
[ "# Dataset Card for \"scrolls_gov_report_preprocessed\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"scrolls_gov_report_preprocessed\"\n\nMore Information needed" ]
60406d74791f1d57f141dc9fe402245c9a348a4b
# Dataset Card for "c_x86_O0_exebench_numeric_1k_json_cleaned" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
zhangshuoming/c_x86_O0_exebench_numeric_1k_json_cleaned
[ "region:us" ]
2024-01-24T17:59:58+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2427004.944, "num_examples": 507}], "download_size": 190990, "dataset_size": 2427004.944}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-24T18:00:03+00:00
[]
[]
TAGS #region-us
# Dataset Card for "c_x86_O0_exebench_numeric_1k_json_cleaned" More Information needed
[ "# Dataset Card for \"c_x86_O0_exebench_numeric_1k_json_cleaned\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"c_x86_O0_exebench_numeric_1k_json_cleaned\"\n\nMore Information needed" ]
e197e03ba94934e0958bd473e9bffcca1f162f54
# Dataset Card for "cc-100-01-final" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
bowphs/cc-100-01-final
[ "region:us" ]
2024-01-24T18:22:28+00:00
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 2336802540, "num_examples": 285115}], "download_size": 586074202, "dataset_size": 2336802540}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-24T18:23:47+00:00
[]
[]
TAGS #region-us
# Dataset Card for "cc-100-01-final" More Information needed
[ "# Dataset Card for \"cc-100-01-final\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"cc-100-01-final\"\n\nMore Information needed" ]
96a9dbb7107ccdc82f9e905acbcb6e487475839c
# Dataset Card for "Gold-alpaca-legal-small" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
bcijo/Gold-alpaca-legal-small
[ "region:us" ]
2024-01-24T18:52:11+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "Instruction", "dtype": "string"}, {"name": "Input", "dtype": "string"}, {"name": "Output", "dtype": "string"}, {"name": "Text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 45414, "num_examples": 5}], "download_size": 59733, "dataset_size": 45414}}
2024-01-24T18:52:16+00:00
[]
[]
TAGS #region-us
# Dataset Card for "Gold-alpaca-legal-small" More Information needed
[ "# Dataset Card for \"Gold-alpaca-legal-small\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"Gold-alpaca-legal-small\"\n\nMore Information needed" ]
bbaf10c669bc950e2b0b8022fdae5c8259278dc4
# Dataset Card for "ICEM" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
lamaeldo/ICEM
[ "region:us" ]
2024-01-24T18:55:03+00:00
{"dataset_info": {"features": [{"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 9981, "num_examples": 104}], "download_size": 4245, "dataset_size": 9981}}
2024-01-24T18:55:04+00:00
[]
[]
TAGS #region-us
# Dataset Card for "ICEM" More Information needed
[ "# Dataset Card for \"ICEM\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"ICEM\"\n\nMore Information needed" ]
a63844bef7cde1fca080f3914ca20f12acf9f46e
# Probabilistic Fairness Demonstration

This folder contains the files necessary to reproduce the results from academic papers demonstrating the accuracy of probabilistic fairness under different sample sizes. Probabilistic fairness is a technique unique to jurity that allows users to calculate fairness metrics when protected status is unknown but a surrogate class feature is available. A <i>surrogate</i> class divides the population into groups where the probability of protected class membership is known given the surrogate class membership.

Probabilistic fairness, its accuracy, and the simulation method used in these demonstrations are detailed in <a href="https://doi.org/10.1007/978-3-031-44505-7_29">"Surrogate Membership for Inferred Metrics in Fairness Evaluation"</a>.

## simulation.py

Demonstrates the accuracy of the probabilistic fairness method, showing that the method gives values that are close to the oracle metrics that would be calculated if protected status were known.

## simulation_compare_to_model.py

One alternative method for calculating fairness metrics when protected status is unknown is to build a predictive model for protected status, assign individuals to groups based on model results, and then calculate fairness metrics as if protected status were known. This script demonstrates that fairness metrics calculated in this way are biased, where the degree of the bias is based on the PPV (positive predictive value/precision) and NPV (negative predictive value) of the model for protected status.

## simulation_counts.py

The performance of probabilistic fairness metrics is related to the number of individuals per surrogate class and the number of surrogate classes available. This simulation examines performance under different sample size scenarios.
## Citation If you use this analysis in an article, please cite as: ``` @inproceedings{DBLP:conf/lion/ThielbarKZPD23, author = {Melinda Thielbar and Serdar Kadioglu and Chenhui Zhang and Rick Pack and Lukas Dannull}, editor = {Meinolf Sellmann and Kevin Tierney}, title = {Surrogate Membership for Inferred Metrics in Fairness Evaluation}, booktitle = {Learning and Intelligent Optimization - 17th International Conference, {LION} 17, Nice, France, June 4-8, 2023, Revised Selected Papers}, series = {Lecture Notes in Computer Science}, volume = {14286}, pages = {424--442}, publisher = {Springer}, year = {2023}, url = {https://doi.org/10.1007/978-3-031-44505-7_29}, doi = {10.1007/978-3-031-44505-7\_29}, timestamp = {Thu, 09 Nov 2023 21:13:04 +0100}, biburl = {https://dblp.org/rec/conf/lion/ThielbarKZPD23.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
mfthielb/probabilistic_fairness
[ "region:us" ]
2024-01-24T19:07:26+00:00
{}
2024-01-24T20:22:10+00:00
[]
[]
TAGS #region-us
# Probabilistic Fairness Demonstration This folder contains the files necessary to reproduce the results from academic papers demonstrating the accuracy of probabilistic fairness under different sample sizes. Probabilistic fairness is a technique unique to jurity that allows users to calculate fairness metrics when protected status is unknown but a surrogate class feature is available. A <i>surrogate</i> class divides the population into groups where the probability of protected class membership is known given the surrogate class membership. Probabilistic fairness, its accuracy, and the simulation method used in these demonstrations are detailed in <a href="URL Surrogate Membership for Inferred Metrics in Fairness Evaluation"</a> ## URL Demonstrates the accuracy of the probabilistic fairness method, showing that the method gives values that are close to the oracle metrics that would be calculated if protected status were known. ## simulation_compare_to_model.py One alternative method for calculating fairness metrics when protected status is unknown is to build a predictive model for protected status, assign individuals to groups based on model results, and then calculate fairness metrics as if protected status were known. This script demonstrates that fairness metrics calculated in this way are biased, where the degree of the bias is based on the PPV (positive predictive value/precision) and NPV (negative predictive value) of the model for protected status. ## simulation_counts.py The performance of probabilistic fairness metrics is related to the number of individuals per surrogate class and the number of surrogate classes available. This simulation examines performance under different sample size scenarios. If you use this analysis in an article, please cite as:
[ "# Probabilistic Fairness Demonstration\nThis folder contains the files necessary to reproduce the results from \nacademic papers demonstrating the accuracy of probabilistic \nfairness under different sample sizes. Probabilistic fairness is a technique \nunique to jurity that allows users to calculate fairness metrics when protected status is \nunknown but a surrogate class feature is available. A <i>surrogate</i> class divides \nthe population into groups where \nthe probability of protected class membership is known given the surrogate class membership.\n\nProbabilistic fairness, its accuracy, and the simulation method used in\nthese demonstrations are detailed in \n<a href=\"URL\nSurrogate Membership for Inferred Metrics in Fairness Evaluation\"</a>", "## URL\nDemonstrates the accuracy of the probabilistic fairness method, \nshowing that the method gives values that are close to the oracle metrics\nthat would be calculated if protected status were known.", "## simulation_compare_to_model.py\nOne alternative method for calculating fairness metrics when protected \nstatus is unknown is to build a predictive model for protected status, assign \nindividuals to groups based on model results, and then calculate fairness \nmetrics as if protected status were known. This script demonstrates that\nfairness metrics calculated in this way are biased, where the degree of the \nbias is based on the PPV (positive predictive value/precision) and NPV\n(negative predictive value) of the model for protected status.", "## simulation_counts.py\nThe performance of probabilistic fairness metrics is related to the number of \nindividuals per surrogate class and the number of surrogate classes available. \nThis simulation examines performance under different sample size scenarios. \n\nIf you use this analysis in an article, please cite as:" ]
[ "TAGS\n#region-us \n", "# Probabilistic Fairness Demonstration\nThis folder contains the files necessary to reproduce the results from \nacademic papers demonstrating the accuracy of probabilistic \nfairness under different sample sizes. Probabilistic fairness is a technique \nunique to jurity that allows users to calculate fairness metrics when protected status is \nunknown but a surrogate class feature is available. A <i>surrogate</i> class divides \nthe population into groups where \nthe probability of protected class membership is known given the surrogate class membership.\n\nProbabilistic fairness, its accuracy, and the simulation method used in\nthese demonstrations are detailed in \n<a href=\"URL\nSurrogate Membership for Inferred Metrics in Fairness Evaluation\"</a>", "## URL\nDemonstrates the accuracy of the probabilistic fairness method, \nshowing that the method gives values that are close to the oracle metrics\nthat would be calculated if protected status were known.", "## simulation_compare_to_model.py\nOne alternative method for calculating fairness metrics when protected \nstatus is unknown is to build a predictive model for protected status, assign \nindividuals to groups based on model results, and then calculate fairness \nmetrics as if protected status were known. This script demonstrates that\nfairness metrics calculated in this way are biased, where the degree of the \nbias is based on the PPV (positive predictive value/precision) and NPV\n(negative predictive value) of the model for protected status.", "## simulation_counts.py\nThe performance of probabilistic fairness metrics is related to the number of \nindividuals per surrogate class and the number of surrogate classes available. \nThis simulation examines performance under different sample size scenarios. \n\nIf you use this analysis in an article, please cite as:" ]
a8b9b91504a7307e297fd5fb7c9782f95395ef0f
This dataset was generated by reformatting [`coref-data/davis_pdp_raw`](https://huggingface.co/datasets/coref-data/davis_pdp_raw) into the indiscrim coreference format. See that repo for dataset details. See [ianporada/coref-data](https://github.com/ianporada/coref-data) for additional conversion details and the conversion script. Please create an issue in the repo above or in this dataset repo for any questions.
coref-data/davis_pdp_indiscrim
[ "region:us" ]
2024-01-24T20:00:34+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "sentences", "list": [{"name": "end_char", "dtype": "int64"}, {"name": "id", "dtype": "int64"}, {"name": "speaker", "dtype": "null"}, {"name": "start_char", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "tokens", "list": [{"name": "deprel", "dtype": "string"}, {"name": "end_char", "dtype": "int64"}, {"name": "feats", "dtype": "string"}, {"name": "head", "dtype": "int64"}, {"name": "id", "dtype": "int64"}, {"name": "lemma", "dtype": "string"}, {"name": "start_char", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "upos", "dtype": "string"}, {"name": "xpos", "dtype": "string"}]}]}, {"name": "coref_chains", "sequence": {"sequence": {"sequence": "int64"}}}, {"name": "genre", "dtype": "string"}, {"name": "meta_data", "struct": [{"name": "comment", "dtype": "string"}, {"name": "source", "dtype": "string"}]}], "splits": [{"name": "test", "num_bytes": 249013, "num_examples": 60}], "download_size": 64159, "dataset_size": 249013}, "configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}]}
2024-01-24T21:12:07+00:00
[]
[]
TAGS #region-us
This dataset was generated by reformatting 'coref-data/davis_pdp_raw' into the indiscrim coreference format. See that repo for dataset details. See ianporada/coref-data for additional conversion details and the conversion script. Please create an issue in the repo above or in this dataset repo for any questions.
[]
[ "TAGS\n#region-us \n" ]
499b6179ed90bf36d4eb138f5e0c045d5d656fbb
# lilac/roblox_luau_corpus This dataset is a [Lilac](http://lilacml.com) processed dataset. Original dataset: [https://huggingface.co/datasets/Roblox/luau_corpus](https://huggingface.co/datasets/Roblox/luau_corpus) To download the dataset to a local directory: ```bash lilac download lilacai/lilac-roblox_luau_corpus ``` or from python with: ```py ll.download("lilacai/lilac-roblox_luau_corpus") ```
lilacai/lilac-roblox_luau_corpus
[ "Lilac", "region:us" ]
2024-01-24T20:25:06+00:00
{"tags": ["Lilac"]}
2024-01-26T14:57:53+00:00
[]
[]
TAGS #Lilac #region-us
# lilac/roblox_luau_corpus This dataset is a Lilac processed dataset. Original dataset: URL To download the dataset to a local directory: or from python with:
[ "# lilac/roblox_luau_corpus\nThis dataset is a Lilac processed dataset. Original dataset: URL\n\nTo download the dataset to a local directory:\n\n\n\nor from python with:" ]
[ "TAGS\n#Lilac #region-us \n", "# lilac/roblox_luau_corpus\nThis dataset is a Lilac processed dataset. Original dataset: URL\n\nTo download the dataset to a local directory:\n\n\n\nor from python with:" ]
b6aaff9a9503df11a9b73280f409828fd457f334
# Dataset Card for "c_x86_O0_exebench_numeric_full_json_cleaned" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
zhangshuoming/c_x86_O0_exebench_numeric_full_json_cleaned
[ "region:us" ]
2024-01-24T21:46:59+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 55607410.377, "num_examples": 9353}], "download_size": 3420578, "dataset_size": 55607410.377}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-24T21:47:04+00:00
[]
[]
TAGS #region-us
# Dataset Card for "c_x86_O0_exebench_numeric_full_json_cleaned" More Information needed
[ "# Dataset Card for \"c_x86_O0_exebench_numeric_full_json_cleaned\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"c_x86_O0_exebench_numeric_full_json_cleaned\"\n\nMore Information needed" ]
76d73ae9a6e5075f4a26a797c7dead6d4634a01c
# Dataset Card for Evaluation run of Josephgflowers/Cinder-1.3B-Test

<!-- Provide a quick summary of the dataset. -->

Dataset automatically created during the evaluation run of model [Josephgflowers/Cinder-1.3B-Test](https://huggingface.co/Josephgflowers/Cinder-1.3B-Test) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).

To load the details from a run, you can for instance do the following:

```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Josephgflowers__Cinder-1.3B-Test",
	"harness_winogrande_5",
	split="train")
```

## Latest results

These are the [latest results from run 2024-01-24T21:51:49.916130](https://huggingface.co/datasets/open-llm-leaderboard/details_Josephgflowers__Cinder-1.3B-Test/blob/main/results_2024-01-24T21-51-49.916130.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks.
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.26857494600586107, "acc_stderr": 0.031129838187826393, "acc_norm": 0.26979883818482703, "acc_norm_stderr": 0.03187851296093977, "mc1": 0.2350061199510404, "mc1_stderr": 0.014843061507731618, "mc2": 0.36618235005901484, "mc2_stderr": 0.014414717099766074 }, "harness|arc:challenge|25": { "acc": 0.30887372013651876, "acc_stderr": 0.013501770929344004, "acc_norm": 0.3319112627986348, "acc_norm_stderr": 0.013760988200880536 }, "harness|hellaswag|10": { "acc": 0.4251145190201155, "acc_stderr": 0.004933500261683597, "acc_norm": 0.5547699661422028, "acc_norm_stderr": 0.004959754882055467 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.27, "acc_stderr": 0.0446196043338474, "acc_norm": 0.27, "acc_norm_stderr": 0.0446196043338474 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.2, "acc_stderr": 0.034554737023254366, "acc_norm": 0.2, "acc_norm_stderr": 0.034554737023254366 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.21710526315789475, "acc_stderr": 0.03355045304882921, "acc_norm": 0.21710526315789475, "acc_norm_stderr": 0.03355045304882921 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.25, "acc_stderr": 0.04351941398892446, "acc_norm": 0.25, "acc_norm_stderr": 0.04351941398892446 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.2490566037735849, "acc_stderr": 0.0266164829805017, "acc_norm": 0.2490566037735849, "acc_norm_stderr": 0.0266164829805017 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.20833333333333334, "acc_stderr": 0.033961162058453336, "acc_norm": 0.20833333333333334, "acc_norm_stderr": 0.033961162058453336 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.16, "acc_stderr": 0.0368452949177471, "acc_norm": 0.16, "acc_norm_stderr": 0.0368452949177471 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.3, "acc_stderr": 0.046056618647183814, "acc_norm": 0.3, "acc_norm_stderr": 0.046056618647183814 }, 
"harness|hendrycksTest-college_mathematics|5": { "acc": 0.3, "acc_stderr": 0.046056618647183814, "acc_norm": 0.3, "acc_norm_stderr": 0.046056618647183814 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.28901734104046245, "acc_stderr": 0.03456425745087001, "acc_norm": 0.28901734104046245, "acc_norm_stderr": 0.03456425745087001 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.20588235294117646, "acc_stderr": 0.04023382273617748, "acc_norm": 0.20588235294117646, "acc_norm_stderr": 0.04023382273617748 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.27, "acc_stderr": 0.044619604333847394, "acc_norm": 0.27, "acc_norm_stderr": 0.044619604333847394 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.225531914893617, "acc_stderr": 0.027321078417387536, "acc_norm": 0.225531914893617, "acc_norm_stderr": 0.027321078417387536 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.22807017543859648, "acc_stderr": 0.03947152782669415, "acc_norm": 0.22807017543859648, "acc_norm_stderr": 0.03947152782669415 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.2689655172413793, "acc_stderr": 0.036951833116502325, "acc_norm": 0.2689655172413793, "acc_norm_stderr": 0.036951833116502325 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.24603174603174602, "acc_stderr": 0.022182037202948368, "acc_norm": 0.24603174603174602, "acc_norm_stderr": 0.022182037202948368 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.19047619047619047, "acc_stderr": 0.03512207412302054, "acc_norm": 0.19047619047619047, "acc_norm_stderr": 0.03512207412302054 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.18, "acc_stderr": 0.03861229196653694, "acc_norm": 0.18, "acc_norm_stderr": 0.03861229196653694 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.26129032258064516, "acc_stderr": 0.024993053397764822, "acc_norm": 0.26129032258064516, "acc_norm_stderr": 0.024993053397764822 }, "harness|hendrycksTest-high_school_chemistry|5": { 
"acc": 0.27586206896551724, "acc_stderr": 0.03144712581678242, "acc_norm": 0.27586206896551724, "acc_norm_stderr": 0.03144712581678242 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.32, "acc_stderr": 0.04688261722621504, "acc_norm": 0.32, "acc_norm_stderr": 0.04688261722621504 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.2727272727272727, "acc_stderr": 0.0347769116216366, "acc_norm": 0.2727272727272727, "acc_norm_stderr": 0.0347769116216366 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.18686868686868688, "acc_stderr": 0.02777253333421899, "acc_norm": 0.18686868686868688, "acc_norm_stderr": 0.02777253333421899 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.36787564766839376, "acc_stderr": 0.034801756684660366, "acc_norm": 0.36787564766839376, "acc_norm_stderr": 0.034801756684660366 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.3435897435897436, "acc_stderr": 0.024078696580635474, "acc_norm": 0.3435897435897436, "acc_norm_stderr": 0.024078696580635474 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.2740740740740741, "acc_stderr": 0.027195934804085622, "acc_norm": 0.2740740740740741, "acc_norm_stderr": 0.027195934804085622 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.2689075630252101, "acc_stderr": 0.028801392193631276, "acc_norm": 0.2689075630252101, "acc_norm_stderr": 0.028801392193631276 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.304635761589404, "acc_stderr": 0.03757949922943343, "acc_norm": 0.304635761589404, "acc_norm_stderr": 0.03757949922943343 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.21651376146788992, "acc_stderr": 0.01765871059444313, "acc_norm": 0.21651376146788992, "acc_norm_stderr": 0.01765871059444313 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.4444444444444444, "acc_stderr": 0.03388857118502325, "acc_norm": 0.4444444444444444, 
"acc_norm_stderr": 0.03388857118502325 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.2647058823529412, "acc_stderr": 0.03096451792692341, "acc_norm": 0.2647058823529412, "acc_norm_stderr": 0.03096451792692341 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.2616033755274262, "acc_stderr": 0.028609516716994934, "acc_norm": 0.2616033755274262, "acc_norm_stderr": 0.028609516716994934 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.3452914798206278, "acc_stderr": 0.03191100192835794, "acc_norm": 0.3452914798206278, "acc_norm_stderr": 0.03191100192835794 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.24427480916030533, "acc_stderr": 0.03768335959728742, "acc_norm": 0.24427480916030533, "acc_norm_stderr": 0.03768335959728742 }, "harness|hendrycksTest-international_law|5": { "acc": 0.23140495867768596, "acc_stderr": 0.03849856098794089, "acc_norm": 0.23140495867768596, "acc_norm_stderr": 0.03849856098794089 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.24074074074074073, "acc_stderr": 0.04133119440243839, "acc_norm": 0.24074074074074073, "acc_norm_stderr": 0.04133119440243839 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.294478527607362, "acc_stderr": 0.03581165790474082, "acc_norm": 0.294478527607362, "acc_norm_stderr": 0.03581165790474082 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.22321428571428573, "acc_stderr": 0.039523019677025116, "acc_norm": 0.22321428571428573, "acc_norm_stderr": 0.039523019677025116 }, "harness|hendrycksTest-management|5": { "acc": 0.22330097087378642, "acc_stderr": 0.04123553189891431, "acc_norm": 0.22330097087378642, "acc_norm_stderr": 0.04123553189891431 }, "harness|hendrycksTest-marketing|5": { "acc": 0.3034188034188034, "acc_stderr": 0.030118210106942635, "acc_norm": 0.3034188034188034, "acc_norm_stderr": 0.030118210106942635 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.29, "acc_stderr": 0.045604802157206845, "acc_norm": 0.29, 
"acc_norm_stderr": 0.045604802157206845 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.26947637292464877, "acc_stderr": 0.015866243073215047, "acc_norm": 0.26947637292464877, "acc_norm_stderr": 0.015866243073215047 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.2861271676300578, "acc_stderr": 0.02433214677913413, "acc_norm": 0.2861271676300578, "acc_norm_stderr": 0.02433214677913413 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.23910614525139665, "acc_stderr": 0.01426555419233114, "acc_norm": 0.23910614525139665, "acc_norm_stderr": 0.01426555419233114 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.238562091503268, "acc_stderr": 0.02440439492808787, "acc_norm": 0.238562091503268, "acc_norm_stderr": 0.02440439492808787 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.3054662379421222, "acc_stderr": 0.02616058445014049, "acc_norm": 0.3054662379421222, "acc_norm_stderr": 0.02616058445014049 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.2345679012345679, "acc_stderr": 0.023576881744005726, "acc_norm": 0.2345679012345679, "acc_norm_stderr": 0.023576881744005726 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.2695035460992908, "acc_stderr": 0.026469036818590617, "acc_norm": 0.2695035460992908, "acc_norm_stderr": 0.026469036818590617 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.24511082138200782, "acc_stderr": 0.010986307870045503, "acc_norm": 0.24511082138200782, "acc_norm_stderr": 0.010986307870045503 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.3897058823529412, "acc_stderr": 0.02962466358115969, "acc_norm": 0.3897058823529412, "acc_norm_stderr": 0.02962466358115969 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.24509803921568626, "acc_stderr": 0.017401816711427653, "acc_norm": 0.24509803921568626, "acc_norm_stderr": 0.017401816711427653 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.2, "acc_stderr": 0.038313051408846034, "acc_norm": 0.2, 
"acc_norm_stderr": 0.038313051408846034 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.2816326530612245, "acc_stderr": 0.02879518557429128, "acc_norm": 0.2816326530612245, "acc_norm_stderr": 0.02879518557429128 }, "harness|hendrycksTest-sociology|5": { "acc": 0.2835820895522388, "acc_stderr": 0.03187187537919797, "acc_norm": 0.2835820895522388, "acc_norm_stderr": 0.03187187537919797 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.31, "acc_stderr": 0.04648231987117316, "acc_norm": 0.31, "acc_norm_stderr": 0.04648231987117316 }, "harness|hendrycksTest-virology|5": { "acc": 0.24096385542168675, "acc_stderr": 0.0332939411907353, "acc_norm": 0.24096385542168675, "acc_norm_stderr": 0.0332939411907353 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.27485380116959063, "acc_stderr": 0.034240429246915824, "acc_norm": 0.27485380116959063, "acc_norm_stderr": 0.034240429246915824 }, "harness|truthfulqa:mc|0": { "mc1": 0.2350061199510404, "mc1_stderr": 0.014843061507731618, "mc2": 0.36618235005901484, "mc2_stderr": 0.014414717099766074 }, "harness|winogrande|5": { "acc": 0.5895816890292028, "acc_stderr": 0.013825107120035863 }, "harness|gsm8k|5": { "acc": 0.028051554207733132, "acc_stderr": 0.004548229533836359 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. 
--> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. 
--> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
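The run metadata that follows stores each evaluation run as dataset splits and parquet files named from the run timestamp (e.g. split `2024_01_24T21_51_49.916130` and files ending in `_2024-01-24T21-51-49.916130.parquet`). A minimal sketch of that naming convention, inferred purely from the split and file names in the config metadata; the helper functions below are illustrative, not part of any Hugging Face API:

```python
# Convert a run timestamp (as reported in the results JSON, e.g.
# "2024-01-24T21:51:49.916130") into the split name and the stamp used
# in parquet filenames. Inferred from the metadata in this dump:
#   split names look like "2024_01_24T21_51_49.916130"
#   filenames   look like "...|5_2024-01-24T21-51-49.916130.parquet"

def split_name(run_timestamp: str) -> str:
    """Split names replace '-' and ':' with '_'; the '.' before
    microseconds is kept as-is."""
    return run_timestamp.replace("-", "_").replace(":", "_")

def file_stamp(run_timestamp: str) -> str:
    """Parquet filename stamps replace only ':' with '-'."""
    return run_timestamp.replace(":", "-")

ts = "2024-01-24T21:51:49.916130"
print(split_name(ts))  # 2024_01_24T21_51_49.916130
print(file_stamp(ts))  # 2024-01-24T21-51-49.916130
```

This is only a convenience for mapping the timestamps in the results JSON onto the split and file names listed in the `configs` metadata; loading the data itself still goes through `datasets.load_dataset` as shown in the card summary.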
open-llm-leaderboard/details_Josephgflowers__Cinder-1.3B-Test
[ "region:us" ]
2024-01-24T21:53:40+00:00
{"pretty_name": "Evaluation run of Josephgflowers/Cinder-1.3B-Test", "dataset_summary": "Dataset automatically created during the evaluation run of model [Josephgflowers/Cinder-1.3B-Test](https://huggingface.co/Josephgflowers/Cinder-1.3B-Test) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Josephgflowers__Cinder-1.3B-Test\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-24T21:51:49.916130](https://huggingface.co/datasets/open-llm-leaderboard/details_Josephgflowers__Cinder-1.3B-Test/blob/main/results_2024-01-24T21-51-49.916130.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.26857494600586107,\n \"acc_stderr\": 0.031129838187826393,\n \"acc_norm\": 0.26979883818482703,\n \"acc_norm_stderr\": 0.03187851296093977,\n \"mc1\": 0.2350061199510404,\n \"mc1_stderr\": 0.014843061507731618,\n \"mc2\": 0.36618235005901484,\n \"mc2_stderr\": 0.014414717099766074\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.30887372013651876,\n \"acc_stderr\": 0.013501770929344004,\n \"acc_norm\": 0.3319112627986348,\n \"acc_norm_stderr\": 0.013760988200880536\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.4251145190201155,\n \"acc_stderr\": 0.004933500261683597,\n \"acc_norm\": 0.5547699661422028,\n \"acc_norm_stderr\": 0.004959754882055467\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.27,\n \"acc_stderr\": 0.0446196043338474,\n \"acc_norm\": 0.27,\n \"acc_norm_stderr\": 0.0446196043338474\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.2,\n \"acc_stderr\": 0.034554737023254366,\n \"acc_norm\": 0.2,\n \"acc_norm_stderr\": 0.034554737023254366\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.21710526315789475,\n \"acc_stderr\": 0.03355045304882921,\n \"acc_norm\": 0.21710526315789475,\n \"acc_norm_stderr\": 0.03355045304882921\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.25,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.2490566037735849,\n \"acc_stderr\": 0.0266164829805017,\n \"acc_norm\": 0.2490566037735849,\n \"acc_norm_stderr\": 0.0266164829805017\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.20833333333333334,\n \"acc_stderr\": 0.033961162058453336,\n \"acc_norm\": 0.20833333333333334,\n \"acc_norm_stderr\": 0.033961162058453336\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.16,\n \"acc_stderr\": 
0.0368452949177471,\n \"acc_norm\": 0.16,\n \"acc_norm_stderr\": 0.0368452949177471\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.28901734104046245,\n \"acc_stderr\": 0.03456425745087001,\n \"acc_norm\": 0.28901734104046245,\n \"acc_norm_stderr\": 0.03456425745087001\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.20588235294117646,\n \"acc_stderr\": 0.04023382273617748,\n \"acc_norm\": 0.20588235294117646,\n \"acc_norm_stderr\": 0.04023382273617748\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.27,\n \"acc_stderr\": 0.044619604333847394,\n \"acc_norm\": 0.27,\n \"acc_norm_stderr\": 0.044619604333847394\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.225531914893617,\n \"acc_stderr\": 0.027321078417387536,\n \"acc_norm\": 0.225531914893617,\n \"acc_norm_stderr\": 0.027321078417387536\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.22807017543859648,\n \"acc_stderr\": 0.03947152782669415,\n \"acc_norm\": 0.22807017543859648,\n \"acc_norm_stderr\": 0.03947152782669415\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.2689655172413793,\n \"acc_stderr\": 0.036951833116502325,\n \"acc_norm\": 0.2689655172413793,\n \"acc_norm_stderr\": 0.036951833116502325\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.24603174603174602,\n \"acc_stderr\": 0.022182037202948368,\n \"acc_norm\": 0.24603174603174602,\n \"acc_norm_stderr\": 0.022182037202948368\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.19047619047619047,\n \"acc_stderr\": 0.03512207412302054,\n 
\"acc_norm\": 0.19047619047619047,\n \"acc_norm_stderr\": 0.03512207412302054\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.18,\n \"acc_stderr\": 0.03861229196653694,\n \"acc_norm\": 0.18,\n \"acc_norm_stderr\": 0.03861229196653694\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.26129032258064516,\n \"acc_stderr\": 0.024993053397764822,\n \"acc_norm\": 0.26129032258064516,\n \"acc_norm_stderr\": 0.024993053397764822\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.27586206896551724,\n \"acc_stderr\": 0.03144712581678242,\n \"acc_norm\": 0.27586206896551724,\n \"acc_norm_stderr\": 0.03144712581678242\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.04688261722621504,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.04688261722621504\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.2727272727272727,\n \"acc_stderr\": 0.0347769116216366,\n \"acc_norm\": 0.2727272727272727,\n \"acc_norm_stderr\": 0.0347769116216366\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.18686868686868688,\n \"acc_stderr\": 0.02777253333421899,\n \"acc_norm\": 0.18686868686868688,\n \"acc_norm_stderr\": 0.02777253333421899\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.36787564766839376,\n \"acc_stderr\": 0.034801756684660366,\n \"acc_norm\": 0.36787564766839376,\n \"acc_norm_stderr\": 0.034801756684660366\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.3435897435897436,\n \"acc_stderr\": 0.024078696580635474,\n \"acc_norm\": 0.3435897435897436,\n \"acc_norm_stderr\": 0.024078696580635474\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.2740740740740741,\n \"acc_stderr\": 0.027195934804085622,\n \"acc_norm\": 0.2740740740740741,\n \"acc_norm_stderr\": 0.027195934804085622\n },\n 
\"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.2689075630252101,\n \"acc_stderr\": 0.028801392193631276,\n \"acc_norm\": 0.2689075630252101,\n \"acc_norm_stderr\": 0.028801392193631276\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.304635761589404,\n \"acc_stderr\": 0.03757949922943343,\n \"acc_norm\": 0.304635761589404,\n \"acc_norm_stderr\": 0.03757949922943343\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.21651376146788992,\n \"acc_stderr\": 0.01765871059444313,\n \"acc_norm\": 0.21651376146788992,\n \"acc_norm_stderr\": 0.01765871059444313\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.4444444444444444,\n \"acc_stderr\": 0.03388857118502325,\n \"acc_norm\": 0.4444444444444444,\n \"acc_norm_stderr\": 0.03388857118502325\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.2647058823529412,\n \"acc_stderr\": 0.03096451792692341,\n \"acc_norm\": 0.2647058823529412,\n \"acc_norm_stderr\": 0.03096451792692341\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.2616033755274262,\n \"acc_stderr\": 0.028609516716994934,\n \"acc_norm\": 0.2616033755274262,\n \"acc_norm_stderr\": 0.028609516716994934\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.3452914798206278,\n \"acc_stderr\": 0.03191100192835794,\n \"acc_norm\": 0.3452914798206278,\n \"acc_norm_stderr\": 0.03191100192835794\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.24427480916030533,\n \"acc_stderr\": 0.03768335959728742,\n \"acc_norm\": 0.24427480916030533,\n \"acc_norm_stderr\": 0.03768335959728742\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.23140495867768596,\n \"acc_stderr\": 0.03849856098794089,\n \"acc_norm\": 0.23140495867768596,\n \"acc_norm_stderr\": 0.03849856098794089\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.24074074074074073,\n \"acc_stderr\": 0.04133119440243839,\n 
\"acc_norm\": 0.24074074074074073,\n \"acc_norm_stderr\": 0.04133119440243839\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.294478527607362,\n \"acc_stderr\": 0.03581165790474082,\n \"acc_norm\": 0.294478527607362,\n \"acc_norm_stderr\": 0.03581165790474082\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.22321428571428573,\n \"acc_stderr\": 0.039523019677025116,\n \"acc_norm\": 0.22321428571428573,\n \"acc_norm_stderr\": 0.039523019677025116\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.22330097087378642,\n \"acc_stderr\": 0.04123553189891431,\n \"acc_norm\": 0.22330097087378642,\n \"acc_norm_stderr\": 0.04123553189891431\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.3034188034188034,\n \"acc_stderr\": 0.030118210106942635,\n \"acc_norm\": 0.3034188034188034,\n \"acc_norm_stderr\": 0.030118210106942635\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.29,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.26947637292464877,\n \"acc_stderr\": 0.015866243073215047,\n \"acc_norm\": 0.26947637292464877,\n \"acc_norm_stderr\": 0.015866243073215047\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.2861271676300578,\n \"acc_stderr\": 0.02433214677913413,\n \"acc_norm\": 0.2861271676300578,\n \"acc_norm_stderr\": 0.02433214677913413\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.23910614525139665,\n \"acc_stderr\": 0.01426555419233114,\n \"acc_norm\": 0.23910614525139665,\n \"acc_norm_stderr\": 0.01426555419233114\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.238562091503268,\n \"acc_stderr\": 0.02440439492808787,\n \"acc_norm\": 0.238562091503268,\n \"acc_norm_stderr\": 0.02440439492808787\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.3054662379421222,\n \"acc_stderr\": 0.02616058445014049,\n 
\"acc_norm\": 0.3054662379421222,\n \"acc_norm_stderr\": 0.02616058445014049\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.2345679012345679,\n \"acc_stderr\": 0.023576881744005726,\n \"acc_norm\": 0.2345679012345679,\n \"acc_norm_stderr\": 0.023576881744005726\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.2695035460992908,\n \"acc_stderr\": 0.026469036818590617,\n \"acc_norm\": 0.2695035460992908,\n \"acc_norm_stderr\": 0.026469036818590617\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.24511082138200782,\n \"acc_stderr\": 0.010986307870045503,\n \"acc_norm\": 0.24511082138200782,\n \"acc_norm_stderr\": 0.010986307870045503\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.3897058823529412,\n \"acc_stderr\": 0.02962466358115969,\n \"acc_norm\": 0.3897058823529412,\n \"acc_norm_stderr\": 0.02962466358115969\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.24509803921568626,\n \"acc_stderr\": 0.017401816711427653,\n \"acc_norm\": 0.24509803921568626,\n \"acc_norm_stderr\": 0.017401816711427653\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.2,\n \"acc_stderr\": 0.038313051408846034,\n \"acc_norm\": 0.2,\n \"acc_norm_stderr\": 0.038313051408846034\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.2816326530612245,\n \"acc_stderr\": 0.02879518557429128,\n \"acc_norm\": 0.2816326530612245,\n \"acc_norm_stderr\": 0.02879518557429128\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.2835820895522388,\n \"acc_stderr\": 0.03187187537919797,\n \"acc_norm\": 0.2835820895522388,\n \"acc_norm_stderr\": 0.03187187537919797\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.24096385542168675,\n \"acc_stderr\": 0.0332939411907353,\n 
\"acc_norm\": 0.24096385542168675,\n \"acc_norm_stderr\": 0.0332939411907353\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.27485380116959063,\n \"acc_stderr\": 0.034240429246915824,\n \"acc_norm\": 0.27485380116959063,\n \"acc_norm_stderr\": 0.034240429246915824\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2350061199510404,\n \"mc1_stderr\": 0.014843061507731618,\n \"mc2\": 0.36618235005901484,\n \"mc2_stderr\": 0.014414717099766074\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.5895816890292028,\n \"acc_stderr\": 0.013825107120035863\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.028051554207733132,\n \"acc_stderr\": 0.004548229533836359\n }\n}\n```", "repo_url": "https://huggingface.co/Josephgflowers/Cinder-1.3B-Test", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|arc:challenge|25_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|gsm8k|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|hellaswag|10_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T21-51-49.916130.parquet", 
"**/details_harness|hendrycksTest-anatomy|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T21-51-49.916130.parquet", 
"**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T21-51-49.916130.parquet", 
"**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T21-51-49.916130.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T21-51-49.916130.parquet", 
"**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T21-51-49.916130.parquet", 
"**/details_harness|hendrycksTest-public_relations|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-24T21-51-49.916130.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": 
["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": 
["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", 
"path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": 
"2024_01_24T21_51_49.916130", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": 
"2024_01_24T21_51_49.916130", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-24T21-51-49.916130.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["**/details_harness|winogrande|5_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-24T21-51-49.916130.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_01_24T21_51_49.916130", "path": ["results_2024-01-24T21-51-49.916130.parquet"]}, {"split": "latest", "path": ["results_2024-01-24T21-51-49.916130.parquet"]}]}]}
2024-01-24T21:54:18+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of Josephgflowers/Cinder-1.3B-Test Dataset automatically created during the evaluation run of model Josephgflowers/Cinder-1.3B-Test on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2024-01-24T21:51:49.916130 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Evaluation run of Josephgflowers/Cinder-1.3B-Test\n\n\n\nDataset automatically created during the evaluation run of model Josephgflowers/Cinder-1.3B-Test on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-01-24T21:51:49.916130 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of Josephgflowers/Cinder-1.3B-Test\n\n\n\nDataset automatically created during the evaluation run of model Josephgflowers/Cinder-1.3B-Test on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-01-24T21:51:49.916130 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
f7997c45b73a26c1b54a697c9c8bb6a78c9e8b84
# Dataset Card for "CIDAR-EVAL-100" # CIDAR-EVAL-100 CIDAR-EVAL-100 contains **100** instructions about Arabic culture. The dataset can be used to evaluate an LLM for culturally relevant answers. ## 📚 Datasets Summary <table> <tr> <th>Name</th> <th>Explanation</th> </tr> <tr> <td><a href=https://huggingface.co/datasets/arbml/cidar>CIDAR</a></t> <td>10,000 instructions and responses in Arabic</td> </tr> <tr> <td><a href=https://huggingface.co/datasets/arbml/cidar-eval-100><b>CIDAR-EVAL-100</b></a></t> <td>100 instructions to evaluate LLMs on cultural relevance</td> </tr> <tr> <td><a href=https://huggingface.co/datasets/arbml/cidar-mcq-100>CIDAR-MCQ-100</a></t> <td>100 Multiple choice questions and answers to evaluate LLMs on cultural relevance </td> </tr> </table> <div width="30px" align="center"> | Category | <a href=https://huggingface.co/datasets/arbml/cidar-eval-100><b>CIDAR-EVAL-100</b></a> | CIDAR-MCQ-100 | |----------|:-------------:|:------:| |Food&Drinks | 14 | 8 | |Names | 14 | 8 | |Animals | 2 | 4 | |Language | 10 | 20 | |Jokes&Puzzles | 3 | 7 | |Religion | 5 | 10 | |Business | 6 | 7 | |Cloths | 4 | 5 | |Science | 3 | 4 | |Sports&Games | 4 | 2 | |Tradition | 4 | 10 | |Weather | 4 | 2 | |Geography | 7 | 8 | |General | 4 | 3 | |Fonts | 5 | 2 | |Literature | 10 | 2 | |Plants | 3 | 0 | <i>Total</i> | 100 | 100 | </div> ## 📋 Dataset Structure - `Source(str)`: Source of the instruction. - `Sentence(str)`: Sentence of the instruction. - `Topic(str)`: Topic covered by the instruction. ## 📁 Loading The Dataset You can download the dataset directly from HuggingFace or use the following code: ```python from datasets import load_dataset cidar = load_dataset('arbml/CIDAR-EVAL-100') ``` ## 📄 Sample From The Dataset: **Source**: Manual **Sentence**: أخبرني عن أشهر أربعة حيوانات في المنطقة **Topic**: Animals ## 🔑 License The dataset is licensed under **Apache-2.0**. [Apache-2.0](https://www.apache.org/licenses/LICENSE-2.0). 
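As a quick illustration of the schema described above (`Source`, `Sentence`, `Topic`), the sketch below groups rows by their `Topic` field. The two rows are illustrative stand-ins following the card's documented schema, not real dataset entries; real rows come from `load_dataset('arbml/CIDAR-EVAL-100')` as shown in the loading section.

```python
from collections import Counter

# Illustrative stand-in rows using the CIDAR-EVAL-100 schema
# (Source, Sentence, Topic). The first row mirrors the sample
# shown in the card; the second is hypothetical.
rows = [
    {"Source": "Manual", "Sentence": "أخبرني عن أشهر أربعة حيوانات في المنطقة", "Topic": "Animals"},
    {"Source": "Manual", "Sentence": "ما هي أشهر الأكلات الشعبية؟", "Topic": "Food&Drinks"},
]

# Count how many instructions fall under each topic
topic_counts = Counter(row["Topic"] for row in rows)
print(topic_counts["Animals"])  # → 1
```

The same `Counter` pattern applied to the full split would reproduce the per-category counts in the table above.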
## Citation ``` @misc{alyafeai2024cidar, title={{CIDAR: Culturally Relevant Instruction Dataset For Arabic}}, author={Zaid Alyafeai and Khalid Almubarak and Ahmed Ashraf and Deema Alnuhait and Saied Alshahrani and Gubran A. Q. Abdulrahman and Gamil Ahmed and Qais Gawah and Zead Saleh and Mustafa Ghaleb and Yousef Ali and Maged S. Al-Shaibani}, year={2024}, eprint={2402.03177}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
arbml/CIDAR-EVAL-100
[ "task_categories:text-generation", "size_categories:n<1K", "language:ar", "license:apache-2.0", "arxiv:2402.03177", "region:us" ]
2024-01-24T22:07:14+00:00
{"language": ["ar"], "license": "apache-2.0", "size_categories": ["n<1K"], "task_categories": ["text-generation"], "pretty_name": "CIDAR-EVAL-100", "dataset_info": {"features": [{"name": "Source", "dtype": "string"}, {"name": "Sentence", "dtype": "string"}, {"name": "Topic", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 10696, "num_examples": 100}], "download_size": 6725, "dataset_size": 10696}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-02-14T15:46:10+00:00
[ "2402.03177" ]
[ "ar" ]
TAGS #task_categories-text-generation #size_categories-n<1K #language-Arabic #license-apache-2.0 #arxiv-2402.03177 #region-us
Dataset Card for "CIDAR-EVAL-100" ================================= CIDAR-EVAL-100 ============== CIDAR-EVAL-100 contains 100 instructions about Arabic culture. The dataset can be used to evaluate an LLM for culturally relevant answers. Datasets Summary ---------------- Dataset Structure ----------------- * 'Source(str)': Source of the instruction. * 'Sentence(str)': Sentence of the instruction. * 'Topic(str)': Topic covered by the instruction. Loading The Dataset ------------------- You can download the dataset directly from HuggingFace or use the following code: Sample From The Dataset: ------------------------ Source: Manual Sentence: أخبرني عن أشهر أربعة حيوانات في المنطقة Topic: Animals License ------- The dataset is licensed under Apache-2.0. Apache-2.0.
[]
[ "TAGS\n#task_categories-text-generation #size_categories-n<1K #language-Arabic #license-apache-2.0 #arxiv-2402.03177 #region-us \n" ]
7fda89e4c0d91ca13b47c32349bcdabc35ac093f
# Dataset Card for UniRef50 UniRef50 data downloaded from https://www.uniprot.org/help/downloads on January 24, 2024.
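UniRef50 is distributed upstream by UniProt as FASTA. As a sketch of how a FASTA entry could map onto this dataset's two string features (`ids` and `text`), here is a minimal parser; the mapping (cluster ID → `ids`, amino-acid sequence → `text`) and the `parse_fasta` helper are assumptions for illustration, not a description of how this repository was actually built.

```python
# Hypothetical sketch: map FASTA entries onto {"ids": ..., "text": ...}
# records matching this dataset's feature names (an assumption).
def parse_fasta(lines):
    records = []
    header, seq = None, []
    for line in lines:
        line = line.strip()
        if line.startswith(">"):
            # Flush the previous record before starting a new one
            if header is not None:
                records.append({"ids": header, "text": "".join(seq)})
            header = line[1:].split()[0]  # e.g. "UniRef50_P12345"
            seq = []
        elif line:
            seq.append(line)
    if header is not None:
        records.append({"ids": header, "text": "".join(seq)})
    return records

example = [
    ">UniRef50_P12345 Cluster: Example protein",
    "MKTAYIAKQR",
    "QISFVKSHFS",
]
print(parse_fasta(example))
# → [{'ids': 'UniRef50_P12345', 'text': 'MKTAYIAKQRQISFVKSHFS'}]
```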
bloyal/uniref50
[ "license:mit", "region:us" ]
2024-01-24T22:29:37+00:00
{"license": "mit", "dataset_info": {"features": [{"name": "ids", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 19549591530, "num_examples": 62759891}], "download_size": 18546997577, "dataset_size": 19549591530}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-24T22:42:32+00:00
[]
[]
TAGS #license-mit #region-us
# Dataset Card for UniRef50 UniRef50 data downloaded from URL on January 24, 2024.
[ "# Dataset Card for UniRef50\n\nUniRef50 data downloaded from URL on January 24, 2024." ]
[ "TAGS\n#license-mit #region-us \n", "# Dataset Card for UniRef50\n\nUniRef50 data downloaded from URL on January 24, 2024." ]
56b1a738e9e63a5f3309732baa9c2efa092669fb
# Dataset Card for Evaluation run of Liangmingxin/ThetaWave-7B-sft <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [Liangmingxin/ThetaWave-7B-sft](https://huggingface.co/Liangmingxin/ThetaWave-7B-sft) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_Liangmingxin__ThetaWave-7B-sft", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2024-01-24T22:28:33.084703](https://huggingface.co/datasets/open-llm-leaderboard/details_Liangmingxin__ThetaWave-7B-sft/blob/main/results_2024-01-24T22-28-33.084703.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.6384596871336017, "acc_stderr": 0.03237779718234635, "acc_norm": 0.6411543747100734, "acc_norm_stderr": 0.03302750666871463, "mc1": 0.4222766217870257, "mc1_stderr": 0.017290733254248174, "mc2": 0.5974097857517422, "mc2_stderr": 0.015196659100037875 }, "harness|arc:challenge|25": { "acc": 0.5844709897610921, "acc_stderr": 0.014401366641216386, "acc_norm": 0.6313993174061433, "acc_norm_stderr": 0.0140978106780422 }, "harness|hellaswag|10": { "acc": 0.6447918741286597, "acc_stderr": 0.004775982650355921, "acc_norm": 0.8441545508862777, "acc_norm_stderr": 0.003619674864035022 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.25, "acc_stderr": 0.04351941398892446, "acc_norm": 0.25, "acc_norm_stderr": 0.04351941398892446 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.5851851851851851, "acc_stderr": 0.04256193767901408, "acc_norm": 0.5851851851851851, "acc_norm_stderr": 0.04256193767901408 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.6842105263157895, "acc_stderr": 0.0378272898086547, "acc_norm": 0.6842105263157895, "acc_norm_stderr": 0.0378272898086547 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.62, "acc_stderr": 0.048783173121456316, "acc_norm": 0.62, "acc_norm_stderr": 0.048783173121456316 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.7056603773584905, "acc_stderr": 0.02804918631569525, "acc_norm": 0.7056603773584905, "acc_norm_stderr": 0.02804918631569525 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.7430555555555556, "acc_stderr": 0.03653946969442099, "acc_norm": 0.7430555555555556, "acc_norm_stderr": 0.03653946969442099 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.49, "acc_stderr": 0.05024183937956912, "acc_norm": 0.49, "acc_norm_stderr": 0.05024183937956912 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.51, "acc_stderr": 0.05024183937956911, "acc_norm": 0.51, "acc_norm_stderr": 
0.05024183937956911 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.37, "acc_stderr": 0.048523658709391, "acc_norm": 0.37, "acc_norm_stderr": 0.048523658709391 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.653179190751445, "acc_stderr": 0.036291466701596636, "acc_norm": 0.653179190751445, "acc_norm_stderr": 0.036291466701596636 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.38235294117647056, "acc_stderr": 0.04835503696107224, "acc_norm": 0.38235294117647056, "acc_norm_stderr": 0.04835503696107224 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.77, "acc_stderr": 0.04229525846816505, "acc_norm": 0.77, "acc_norm_stderr": 0.04229525846816505 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.5829787234042553, "acc_stderr": 0.03223276266711712, "acc_norm": 0.5829787234042553, "acc_norm_stderr": 0.03223276266711712 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.4824561403508772, "acc_stderr": 0.04700708033551038, "acc_norm": 0.4824561403508772, "acc_norm_stderr": 0.04700708033551038 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.593103448275862, "acc_stderr": 0.04093793981266236, "acc_norm": 0.593103448275862, "acc_norm_stderr": 0.04093793981266236 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.3968253968253968, "acc_stderr": 0.025197101074246487, "acc_norm": 0.3968253968253968, "acc_norm_stderr": 0.025197101074246487 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.42857142857142855, "acc_stderr": 0.0442626668137991, "acc_norm": 0.42857142857142855, "acc_norm_stderr": 0.0442626668137991 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.37, "acc_stderr": 0.04852365870939099, "acc_norm": 0.37, "acc_norm_stderr": 0.04852365870939099 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.7548387096774194, "acc_stderr": 0.024472243840895525, "acc_norm": 0.7548387096774194, "acc_norm_stderr": 0.024472243840895525 }, "harness|hendrycksTest-high_school_chemistry|5": { 
"acc": 0.5270935960591133, "acc_stderr": 0.03512819077876106, "acc_norm": 0.5270935960591133, "acc_norm_stderr": 0.03512819077876106 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.64, "acc_stderr": 0.048241815132442176, "acc_norm": 0.64, "acc_norm_stderr": 0.048241815132442176 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.7757575757575758, "acc_stderr": 0.03256866661681102, "acc_norm": 0.7757575757575758, "acc_norm_stderr": 0.03256866661681102 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.7878787878787878, "acc_stderr": 0.029126522834586815, "acc_norm": 0.7878787878787878, "acc_norm_stderr": 0.029126522834586815 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.8704663212435233, "acc_stderr": 0.024233532297758723, "acc_norm": 0.8704663212435233, "acc_norm_stderr": 0.024233532297758723 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.6384615384615384, "acc_stderr": 0.024359581465397, "acc_norm": 0.6384615384615384, "acc_norm_stderr": 0.024359581465397 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.34444444444444444, "acc_stderr": 0.02897264888484427, "acc_norm": 0.34444444444444444, "acc_norm_stderr": 0.02897264888484427 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.6932773109243697, "acc_stderr": 0.029953823891887037, "acc_norm": 0.6932773109243697, "acc_norm_stderr": 0.029953823891887037 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.36423841059602646, "acc_stderr": 0.03929111781242742, "acc_norm": 0.36423841059602646, "acc_norm_stderr": 0.03929111781242742 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8422018348623853, "acc_stderr": 0.01563002297009245, "acc_norm": 0.8422018348623853, "acc_norm_stderr": 0.01563002297009245 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.5370370370370371, "acc_stderr": 0.03400603625538272, "acc_norm": 0.5370370370370371, 
"acc_norm_stderr": 0.03400603625538272 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.7941176470588235, "acc_stderr": 0.028379449451588663, "acc_norm": 0.7941176470588235, "acc_norm_stderr": 0.028379449451588663 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.8059071729957806, "acc_stderr": 0.025744902532290916, "acc_norm": 0.8059071729957806, "acc_norm_stderr": 0.025744902532290916 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.6771300448430493, "acc_stderr": 0.031381476375755, "acc_norm": 0.6771300448430493, "acc_norm_stderr": 0.031381476375755 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.7633587786259542, "acc_stderr": 0.03727673575596913, "acc_norm": 0.7633587786259542, "acc_norm_stderr": 0.03727673575596913 }, "harness|hendrycksTest-international_law|5": { "acc": 0.8264462809917356, "acc_stderr": 0.0345727283691767, "acc_norm": 0.8264462809917356, "acc_norm_stderr": 0.0345727283691767 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.7870370370370371, "acc_stderr": 0.03957835471980979, "acc_norm": 0.7870370370370371, "acc_norm_stderr": 0.03957835471980979 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7730061349693251, "acc_stderr": 0.03291099578615769, "acc_norm": 0.7730061349693251, "acc_norm_stderr": 0.03291099578615769 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.4375, "acc_stderr": 0.04708567521880525, "acc_norm": 0.4375, "acc_norm_stderr": 0.04708567521880525 }, "harness|hendrycksTest-management|5": { "acc": 0.7669902912621359, "acc_stderr": 0.04185832598928315, "acc_norm": 0.7669902912621359, "acc_norm_stderr": 0.04185832598928315 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8846153846153846, "acc_stderr": 0.020930193185179333, "acc_norm": 0.8846153846153846, "acc_norm_stderr": 0.020930193185179333 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.71, "acc_stderr": 0.04560480215720684, "acc_norm": 0.71, "acc_norm_stderr": 0.04560480215720684 }, 
"harness|hendrycksTest-miscellaneous|5": { "acc": 0.8148148148148148, "acc_stderr": 0.013890862162876164, "acc_norm": 0.8148148148148148, "acc_norm_stderr": 0.013890862162876164 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.6994219653179191, "acc_stderr": 0.0246853168672578, "acc_norm": 0.6994219653179191, "acc_norm_stderr": 0.0246853168672578 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.4424581005586592, "acc_stderr": 0.01661139368726858, "acc_norm": 0.4424581005586592, "acc_norm_stderr": 0.01661139368726858 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.738562091503268, "acc_stderr": 0.025160998214292452, "acc_norm": 0.738562091503268, "acc_norm_stderr": 0.025160998214292452 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.7041800643086816, "acc_stderr": 0.025922371788818767, "acc_norm": 0.7041800643086816, "acc_norm_stderr": 0.025922371788818767 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.7037037037037037, "acc_stderr": 0.025407197798890162, "acc_norm": 0.7037037037037037, "acc_norm_stderr": 0.025407197798890162 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.4716312056737589, "acc_stderr": 0.02977945095730307, "acc_norm": 0.4716312056737589, "acc_norm_stderr": 0.02977945095730307 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.4498044328552803, "acc_stderr": 0.012705721498565109, "acc_norm": 0.4498044328552803, "acc_norm_stderr": 0.012705721498565109 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.6764705882352942, "acc_stderr": 0.02841820861940676, "acc_norm": 0.6764705882352942, "acc_norm_stderr": 0.02841820861940676 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6601307189542484, "acc_stderr": 0.019162418588623553, "acc_norm": 0.6601307189542484, "acc_norm_stderr": 0.019162418588623553 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6909090909090909, "acc_stderr": 0.044262946482000985, "acc_norm": 0.6909090909090909, "acc_norm_stderr": 
0.044262946482000985 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.7142857142857143, "acc_stderr": 0.0289205832206756, "acc_norm": 0.7142857142857143, "acc_norm_stderr": 0.0289205832206756 }, "harness|hendrycksTest-sociology|5": { "acc": 0.746268656716418, "acc_stderr": 0.03076944496729602, "acc_norm": 0.746268656716418, "acc_norm_stderr": 0.03076944496729602 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.86, "acc_stderr": 0.03487350880197769, "acc_norm": 0.86, "acc_norm_stderr": 0.03487350880197769 }, "harness|hendrycksTest-virology|5": { "acc": 0.5301204819277109, "acc_stderr": 0.03885425420866767, "acc_norm": 0.5301204819277109, "acc_norm_stderr": 0.03885425420866767 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.8304093567251462, "acc_stderr": 0.02878210810540171, "acc_norm": 0.8304093567251462, "acc_norm_stderr": 0.02878210810540171 }, "harness|truthfulqa:mc|0": { "mc1": 0.4222766217870257, "mc1_stderr": 0.017290733254248174, "mc2": 0.5974097857517422, "mc2_stderr": 0.015196659100037875 }, "harness|winogrande|5": { "acc": 0.7963693764798737, "acc_stderr": 0.011317798781626923 }, "harness|gsm8k|5": { "acc": 0.5678544351781653, "acc_stderr": 0.013645072137842445 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. 
--> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. 
--> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
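The results block above maps each task name (e.g. "harness|arc:challenge|25") to a dict of metrics. A small illustrative sketch of reading one metric from a dict shaped like that JSON; the values below are copied from this run's ARC-challenge entry, and the `metric` helper is an assumption for illustration, not part of the leaderboard tooling:

```python
# A fragment of the results JSON shown above, as a plain Python dict.
# Values are copied verbatim from the "harness|arc:challenge|25" entry.
results = {
    "harness|arc:challenge|25": {
        "acc": 0.5844709897610921,
        "acc_norm": 0.6313993174061433,
    },
}


def metric(results, task, name):
    """Look up a single metric (e.g. normalized accuracy) for one task."""
    return results[task][name]


print(round(metric(results, "harness|arc:challenge|25", "acc_norm"), 4))  # prints 0.6314
```

In real use one would load the full results file from the repository (as the `load_dataset` snippet in the card shows) rather than hand-copying values.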
open-llm-leaderboard/details_Liangmingxin__ThetaWave-7B-sft
[ "region:us" ]
2024-01-24T22:30:49+00:00
{"pretty_name": "Evaluation run of Liangmingxin/ThetaWave-7B-sft", "dataset_summary": "Dataset automatically created during the evaluation run of model [Liangmingxin/ThetaWave-7B-sft](https://huggingface.co/Liangmingxin/ThetaWave-7B-sft) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Liangmingxin__ThetaWave-7B-sft\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-24T22:28:33.084703](https://huggingface.co/datasets/open-llm-leaderboard/details_Liangmingxin__ThetaWave-7B-sft/blob/main/results_2024-01-24T22-28-33.084703.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6384596871336017,\n \"acc_stderr\": 0.03237779718234635,\n \"acc_norm\": 0.6411543747100734,\n \"acc_norm_stderr\": 0.03302750666871463,\n \"mc1\": 0.4222766217870257,\n \"mc1_stderr\": 0.017290733254248174,\n \"mc2\": 0.5974097857517422,\n \"mc2_stderr\": 0.015196659100037875\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5844709897610921,\n \"acc_stderr\": 0.014401366641216386,\n \"acc_norm\": 0.6313993174061433,\n \"acc_norm_stderr\": 0.0140978106780422\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6447918741286597,\n \"acc_stderr\": 0.004775982650355921,\n \"acc_norm\": 0.8441545508862777,\n \"acc_norm_stderr\": 0.003619674864035022\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.25,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5851851851851851,\n \"acc_stderr\": 0.04256193767901408,\n \"acc_norm\": 0.5851851851851851,\n \"acc_norm_stderr\": 0.04256193767901408\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6842105263157895,\n \"acc_stderr\": 0.0378272898086547,\n \"acc_norm\": 0.6842105263157895,\n \"acc_norm_stderr\": 0.0378272898086547\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.62,\n \"acc_stderr\": 0.048783173121456316,\n \"acc_norm\": 0.62,\n \"acc_norm_stderr\": 0.048783173121456316\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.7056603773584905,\n \"acc_stderr\": 0.02804918631569525,\n \"acc_norm\": 0.7056603773584905,\n \"acc_norm_stderr\": 0.02804918631569525\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7430555555555556,\n \"acc_stderr\": 0.03653946969442099,\n \"acc_norm\": 0.7430555555555556,\n \"acc_norm_stderr\": 0.03653946969442099\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.49,\n 
\"acc_stderr\": 0.05024183937956912,\n \"acc_norm\": 0.49,\n \"acc_norm_stderr\": 0.05024183937956912\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.51,\n \"acc_stderr\": 0.05024183937956911,\n \"acc_norm\": 0.51,\n \"acc_norm_stderr\": 0.05024183937956911\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.37,\n \"acc_stderr\": 0.048523658709391,\n \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.048523658709391\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.653179190751445,\n \"acc_stderr\": 0.036291466701596636,\n \"acc_norm\": 0.653179190751445,\n \"acc_norm_stderr\": 0.036291466701596636\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.38235294117647056,\n \"acc_stderr\": 0.04835503696107224,\n \"acc_norm\": 0.38235294117647056,\n \"acc_norm_stderr\": 0.04835503696107224\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.77,\n \"acc_stderr\": 0.04229525846816505,\n \"acc_norm\": 0.77,\n \"acc_norm_stderr\": 0.04229525846816505\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5829787234042553,\n \"acc_stderr\": 0.03223276266711712,\n \"acc_norm\": 0.5829787234042553,\n \"acc_norm_stderr\": 0.03223276266711712\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4824561403508772,\n \"acc_stderr\": 0.04700708033551038,\n \"acc_norm\": 0.4824561403508772,\n \"acc_norm_stderr\": 0.04700708033551038\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.593103448275862,\n \"acc_stderr\": 0.04093793981266236,\n \"acc_norm\": 0.593103448275862,\n \"acc_norm_stderr\": 0.04093793981266236\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.3968253968253968,\n \"acc_stderr\": 0.025197101074246487,\n \"acc_norm\": 0.3968253968253968,\n \"acc_norm_stderr\": 0.025197101074246487\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.42857142857142855,\n \"acc_stderr\": 0.0442626668137991,\n 
\"acc_norm\": 0.42857142857142855,\n \"acc_norm_stderr\": 0.0442626668137991\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.37,\n \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7548387096774194,\n \"acc_stderr\": 0.024472243840895525,\n \"acc_norm\": 0.7548387096774194,\n \"acc_norm_stderr\": 0.024472243840895525\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5270935960591133,\n \"acc_stderr\": 0.03512819077876106,\n \"acc_norm\": 0.5270935960591133,\n \"acc_norm_stderr\": 0.03512819077876106\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.64,\n \"acc_stderr\": 0.048241815132442176,\n \"acc_norm\": 0.64,\n \"acc_norm_stderr\": 0.048241815132442176\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7757575757575758,\n \"acc_stderr\": 0.03256866661681102,\n \"acc_norm\": 0.7757575757575758,\n \"acc_norm_stderr\": 0.03256866661681102\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7878787878787878,\n \"acc_stderr\": 0.029126522834586815,\n \"acc_norm\": 0.7878787878787878,\n \"acc_norm_stderr\": 0.029126522834586815\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8704663212435233,\n \"acc_stderr\": 0.024233532297758723,\n \"acc_norm\": 0.8704663212435233,\n \"acc_norm_stderr\": 0.024233532297758723\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6384615384615384,\n \"acc_stderr\": 0.024359581465397,\n \"acc_norm\": 0.6384615384615384,\n \"acc_norm_stderr\": 0.024359581465397\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.34444444444444444,\n \"acc_stderr\": 0.02897264888484427,\n \"acc_norm\": 0.34444444444444444,\n \"acc_norm_stderr\": 0.02897264888484427\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n 
\"acc\": 0.6932773109243697,\n \"acc_stderr\": 0.029953823891887037,\n \"acc_norm\": 0.6932773109243697,\n \"acc_norm_stderr\": 0.029953823891887037\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.36423841059602646,\n \"acc_stderr\": 0.03929111781242742,\n \"acc_norm\": 0.36423841059602646,\n \"acc_norm_stderr\": 0.03929111781242742\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8422018348623853,\n \"acc_stderr\": 0.01563002297009245,\n \"acc_norm\": 0.8422018348623853,\n \"acc_norm_stderr\": 0.01563002297009245\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5370370370370371,\n \"acc_stderr\": 0.03400603625538272,\n \"acc_norm\": 0.5370370370370371,\n \"acc_norm_stderr\": 0.03400603625538272\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7941176470588235,\n \"acc_stderr\": 0.028379449451588663,\n \"acc_norm\": 0.7941176470588235,\n \"acc_norm_stderr\": 0.028379449451588663\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.8059071729957806,\n \"acc_stderr\": 0.025744902532290916,\n \"acc_norm\": 0.8059071729957806,\n \"acc_norm_stderr\": 0.025744902532290916\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6771300448430493,\n \"acc_stderr\": 0.031381476375755,\n \"acc_norm\": 0.6771300448430493,\n \"acc_norm_stderr\": 0.031381476375755\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7633587786259542,\n \"acc_stderr\": 0.03727673575596913,\n \"acc_norm\": 0.7633587786259542,\n \"acc_norm_stderr\": 0.03727673575596913\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8264462809917356,\n \"acc_stderr\": 0.0345727283691767,\n \"acc_norm\": 0.8264462809917356,\n \"acc_norm_stderr\": 0.0345727283691767\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7870370370370371,\n \"acc_stderr\": 0.03957835471980979,\n \"acc_norm\": 0.7870370370370371,\n \"acc_norm_stderr\": 
0.03957835471980979\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7730061349693251,\n \"acc_stderr\": 0.03291099578615769,\n \"acc_norm\": 0.7730061349693251,\n \"acc_norm_stderr\": 0.03291099578615769\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4375,\n \"acc_stderr\": 0.04708567521880525,\n \"acc_norm\": 0.4375,\n \"acc_norm_stderr\": 0.04708567521880525\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7669902912621359,\n \"acc_stderr\": 0.04185832598928315,\n \"acc_norm\": 0.7669902912621359,\n \"acc_norm_stderr\": 0.04185832598928315\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8846153846153846,\n \"acc_stderr\": 0.020930193185179333,\n \"acc_norm\": 0.8846153846153846,\n \"acc_norm_stderr\": 0.020930193185179333\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.71,\n \"acc_stderr\": 0.04560480215720684,\n \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.04560480215720684\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8148148148148148,\n \"acc_stderr\": 0.013890862162876164,\n \"acc_norm\": 0.8148148148148148,\n \"acc_norm_stderr\": 0.013890862162876164\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.6994219653179191,\n \"acc_stderr\": 0.0246853168672578,\n \"acc_norm\": 0.6994219653179191,\n \"acc_norm_stderr\": 0.0246853168672578\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.4424581005586592,\n \"acc_stderr\": 0.01661139368726858,\n \"acc_norm\": 0.4424581005586592,\n \"acc_norm_stderr\": 0.01661139368726858\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.738562091503268,\n \"acc_stderr\": 0.025160998214292452,\n \"acc_norm\": 0.738562091503268,\n \"acc_norm_stderr\": 0.025160998214292452\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7041800643086816,\n \"acc_stderr\": 0.025922371788818767,\n \"acc_norm\": 0.7041800643086816,\n \"acc_norm_stderr\": 0.025922371788818767\n },\n 
\"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7037037037037037,\n \"acc_stderr\": 0.025407197798890162,\n \"acc_norm\": 0.7037037037037037,\n \"acc_norm_stderr\": 0.025407197798890162\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4716312056737589,\n \"acc_stderr\": 0.02977945095730307,\n \"acc_norm\": 0.4716312056737589,\n \"acc_norm_stderr\": 0.02977945095730307\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4498044328552803,\n \"acc_stderr\": 0.012705721498565109,\n \"acc_norm\": 0.4498044328552803,\n \"acc_norm_stderr\": 0.012705721498565109\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6764705882352942,\n \"acc_stderr\": 0.02841820861940676,\n \"acc_norm\": 0.6764705882352942,\n \"acc_norm_stderr\": 0.02841820861940676\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6601307189542484,\n \"acc_stderr\": 0.019162418588623553,\n \"acc_norm\": 0.6601307189542484,\n \"acc_norm_stderr\": 0.019162418588623553\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6909090909090909,\n \"acc_stderr\": 0.044262946482000985,\n \"acc_norm\": 0.6909090909090909,\n \"acc_norm_stderr\": 0.044262946482000985\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7142857142857143,\n \"acc_stderr\": 0.0289205832206756,\n \"acc_norm\": 0.7142857142857143,\n \"acc_norm_stderr\": 0.0289205832206756\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.746268656716418,\n \"acc_stderr\": 0.03076944496729602,\n \"acc_norm\": 0.746268656716418,\n \"acc_norm_stderr\": 0.03076944496729602\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.86,\n \"acc_stderr\": 0.03487350880197769,\n \"acc_norm\": 0.86,\n \"acc_norm_stderr\": 0.03487350880197769\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5301204819277109,\n \"acc_stderr\": 0.03885425420866767,\n \"acc_norm\": 0.5301204819277109,\n \"acc_norm_stderr\": 
0.03885425420866767\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8304093567251462,\n \"acc_stderr\": 0.02878210810540171,\n \"acc_norm\": 0.8304093567251462,\n \"acc_norm_stderr\": 0.02878210810540171\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4222766217870257,\n \"mc1_stderr\": 0.017290733254248174,\n \"mc2\": 0.5974097857517422,\n \"mc2_stderr\": 0.015196659100037875\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7963693764798737,\n \"acc_stderr\": 0.011317798781626923\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.5678544351781653,\n \"acc_stderr\": 0.013645072137842445\n }\n}\n```", "repo_url": "https://huggingface.co/Liangmingxin/ThetaWave-7B-sft", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|arc:challenge|25_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|gsm8k|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|hellaswag|10_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-24T22-28-33.084703.parquet", 
"**/details_harness|hendrycksTest-astronomy|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T22-28-33.084703.parquet", 
"**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T22-28-33.084703.parquet", 
"**/details_harness|hendrycksTest-nutrition|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T22-28-33.084703.parquet", 
"**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T22-28-33.084703.parquet", 
"**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-24T22-28-33.084703.parquet", 
"**/details_harness|hendrycksTest-security_studies|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-24T22-28-33.084703.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": 
["**/details_harness|hendrycksTest-college_physics|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", 
"data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-24T22-28-33.084703.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", 
"path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["**/details_harness|winogrande|5_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-24T22-28-33.084703.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_01_24T22_28_33.084703", "path": ["results_2024-01-24T22-28-33.084703.parquet"]}, {"split": "latest", "path": ["results_2024-01-24T22-28-33.084703.parquet"]}]}]}
2024-01-24T22:31:17+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of Liangmingxin/ThetaWave-7B-sft Dataset automatically created during the evaluation run of model Liangmingxin/ThetaWave-7B-sft on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2024-01-24T22:28:33.084703 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
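The loading snippet announced by "To load the details from a run, you can for instance do the following:" was stripped from the flattened card text above. A minimal sketch of what it would look like, assuming the leaderboard's usual `details_<org>__<model>` repo-naming convention (the derived repo name is an inference from that convention, not quoted from this card):

```python
# Derive the details-repo name from the model id, per the leaderboard's
# naming convention (assumed, not taken verbatim from this card).
model_id = "Liangmingxin/ThetaWave-7B-sft"
repo = "open-llm-leaderboard/details_" + model_id.replace("/", "__")
print(repo)  # open-llm-leaderboard/details_Liangmingxin__ThetaWave-7B-sft

# With the `datasets` library installed, the per-sample details for one task
# (e.g. the 5-shot Winogrande config listed above) could then be fetched with:
#   from datasets import load_dataset
#   data = load_dataset(repo, "harness_winogrande_5", split="latest")
```

The `"latest"` split name comes from the config listing above, where every task config defines both a timestamped split and a `"latest"` alias.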
[ "# Dataset Card for Evaluation run of Liangmingxin/ThetaWave-7B-sft\n\n\n\nDataset automatically created during the evaluation run of model Liangmingxin/ThetaWave-7B-sft on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-01-24T22:28:33.084703(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of Liangmingxin/ThetaWave-7B-sft\n\n\n\nDataset automatically created during the evaluation run of model Liangmingxin/ThetaWave-7B-sft on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-01-24T22:28:33.084703(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
4c3ff968924ca09792a4a1d29f14ee402f072da6
# Dataset Card for Evaluation run of Charlie911/vicuna-7b-v1.5-lora-temporal-without-mctaco-1 <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [Charlie911/vicuna-7b-v1.5-lora-temporal-without-mctaco-1](https://huggingface.co/Charlie911/vicuna-7b-v1.5-lora-temporal-without-mctaco-1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_Charlie911__vicuna-7b-v1.5-lora-temporal-without-mctaco-1", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2024-01-24T22:38:44.232137](https://huggingface.co/datasets/open-llm-leaderboard/details_Charlie911__vicuna-7b-v1.5-lora-temporal-without-mctaco-1/blob/main/results_2024-01-24T22-38-44.232137.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.47390153459197415, "acc_stderr": 0.034488650414693046, "acc_norm": 0.47990494504266334, "acc_norm_stderr": 0.03528061933639502, "mc1": 0.26193390452876375, "mc1_stderr": 0.015392118805015021, "mc2": 0.39945852317616254, "mc2_stderr": 0.01392826930362614 }, "harness|arc:challenge|25": { "acc": 0.4539249146757679, "acc_stderr": 0.014549221105171865, "acc_norm": 0.4948805460750853, "acc_norm_stderr": 0.01461062489030916 }, "harness|hellaswag|10": { "acc": 0.5613423620792671, "acc_stderr": 0.004952087083128905, "acc_norm": 0.7593108942441744, "acc_norm_stderr": 0.0042662819001443916 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.28, "acc_stderr": 0.04512608598542128, "acc_norm": 0.28, "acc_norm_stderr": 0.04512608598542128 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.4666666666666667, "acc_stderr": 0.043097329010363554, "acc_norm": 0.4666666666666667, "acc_norm_stderr": 0.043097329010363554 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.4868421052631579, "acc_stderr": 0.04067533136309173, "acc_norm": 0.4868421052631579, "acc_norm_stderr": 0.04067533136309173 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.48, "acc_stderr": 0.050211673156867795, "acc_norm": 0.48, "acc_norm_stderr": 0.050211673156867795 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.5056603773584906, "acc_stderr": 0.030770900763851323, "acc_norm": 0.5056603773584906, "acc_norm_stderr": 0.030770900763851323 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.4861111111111111, "acc_stderr": 0.041795966175810016, "acc_norm": 0.4861111111111111, "acc_norm_stderr": 0.041795966175810016 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.3, "acc_stderr": 0.046056618647183814, "acc_norm": 0.3, "acc_norm_stderr": 0.046056618647183814 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.42, "acc_stderr": 0.04960449637488584, "acc_norm": 0.42, 
"acc_norm_stderr": 0.04960449637488584 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.41, "acc_stderr": 0.04943110704237102, "acc_norm": 0.41, "acc_norm_stderr": 0.04943110704237102 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.4682080924855491, "acc_stderr": 0.03804749744364763, "acc_norm": 0.4682080924855491, "acc_norm_stderr": 0.03804749744364763 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.20588235294117646, "acc_stderr": 0.04023382273617747, "acc_norm": 0.20588235294117646, "acc_norm_stderr": 0.04023382273617747 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.59, "acc_stderr": 0.049431107042371025, "acc_norm": 0.59, "acc_norm_stderr": 0.049431107042371025 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.4595744680851064, "acc_stderr": 0.03257901482099834, "acc_norm": 0.4595744680851064, "acc_norm_stderr": 0.03257901482099834 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.2982456140350877, "acc_stderr": 0.04303684033537315, "acc_norm": 0.2982456140350877, "acc_norm_stderr": 0.04303684033537315 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.4413793103448276, "acc_stderr": 0.04137931034482758, "acc_norm": 0.4413793103448276, "acc_norm_stderr": 0.04137931034482758 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.30423280423280424, "acc_stderr": 0.02369541500946309, "acc_norm": 0.30423280423280424, "acc_norm_stderr": 0.02369541500946309 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.2857142857142857, "acc_stderr": 0.0404061017820884, "acc_norm": 0.2857142857142857, "acc_norm_stderr": 0.0404061017820884 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.3, "acc_stderr": 0.046056618647183814, "acc_norm": 0.3, "acc_norm_stderr": 0.046056618647183814 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.5161290322580645, "acc_stderr": 0.028429203176724555, "acc_norm": 0.5161290322580645, "acc_norm_stderr": 0.028429203176724555 }, 
"harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.35467980295566504, "acc_stderr": 0.033661244890514495, "acc_norm": 0.35467980295566504, "acc_norm_stderr": 0.033661244890514495 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.4, "acc_stderr": 0.049236596391733084, "acc_norm": 0.4, "acc_norm_stderr": 0.049236596391733084 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.6, "acc_stderr": 0.03825460278380026, "acc_norm": 0.6, "acc_norm_stderr": 0.03825460278380026 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.5757575757575758, "acc_stderr": 0.03521224908841586, "acc_norm": 0.5757575757575758, "acc_norm_stderr": 0.03521224908841586 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.6632124352331606, "acc_stderr": 0.03410780251836183, "acc_norm": 0.6632124352331606, "acc_norm_stderr": 0.03410780251836183 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.4307692307692308, "acc_stderr": 0.02510682066053975, "acc_norm": 0.4307692307692308, "acc_norm_stderr": 0.02510682066053975 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.25925925925925924, "acc_stderr": 0.026719240783712166, "acc_norm": 0.25925925925925924, "acc_norm_stderr": 0.026719240783712166 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.47478991596638653, "acc_stderr": 0.0324371805513741, "acc_norm": 0.47478991596638653, "acc_norm_stderr": 0.0324371805513741 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.26490066225165565, "acc_stderr": 0.03603038545360384, "acc_norm": 0.26490066225165565, "acc_norm_stderr": 0.03603038545360384 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.6385321100917432, "acc_stderr": 0.020598082009937384, "acc_norm": 0.6385321100917432, "acc_norm_stderr": 0.020598082009937384 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.35648148148148145, "acc_stderr": 0.032664783315272714, "acc_norm": 
0.35648148148148145, "acc_norm_stderr": 0.032664783315272714 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.5833333333333334, "acc_stderr": 0.03460228327239171, "acc_norm": 0.5833333333333334, "acc_norm_stderr": 0.03460228327239171 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.5949367088607594, "acc_stderr": 0.031955147413706704, "acc_norm": 0.5949367088607594, "acc_norm_stderr": 0.031955147413706704 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.6143497757847534, "acc_stderr": 0.03266842214289201, "acc_norm": 0.6143497757847534, "acc_norm_stderr": 0.03266842214289201 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.6106870229007634, "acc_stderr": 0.04276486542814591, "acc_norm": 0.6106870229007634, "acc_norm_stderr": 0.04276486542814591 }, "harness|hendrycksTest-international_law|5": { "acc": 0.5619834710743802, "acc_stderr": 0.04529146804435792, "acc_norm": 0.5619834710743802, "acc_norm_stderr": 0.04529146804435792 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.5277777777777778, "acc_stderr": 0.048262172941398944, "acc_norm": 0.5277777777777778, "acc_norm_stderr": 0.048262172941398944 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.44785276073619634, "acc_stderr": 0.03906947479456601, "acc_norm": 0.44785276073619634, "acc_norm_stderr": 0.03906947479456601 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.39285714285714285, "acc_stderr": 0.046355501356099754, "acc_norm": 0.39285714285714285, "acc_norm_stderr": 0.046355501356099754 }, "harness|hendrycksTest-management|5": { "acc": 0.5922330097087378, "acc_stderr": 0.0486577757041077, "acc_norm": 0.5922330097087378, "acc_norm_stderr": 0.0486577757041077 }, "harness|hendrycksTest-marketing|5": { "acc": 0.7136752136752137, "acc_stderr": 0.02961432369045665, "acc_norm": 0.7136752136752137, "acc_norm_stderr": 0.02961432369045665 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.54, "acc_stderr": 0.05009082659620332, "acc_norm": 0.54, 
"acc_norm_stderr": 0.05009082659620332 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.6602809706257982, "acc_stderr": 0.01693639411430162, "acc_norm": 0.6602809706257982, "acc_norm_stderr": 0.01693639411430162 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.5375722543352601, "acc_stderr": 0.026842985519615375, "acc_norm": 0.5375722543352601, "acc_norm_stderr": 0.026842985519615375 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.23798882681564246, "acc_stderr": 0.014242630070574915, "acc_norm": 0.23798882681564246, "acc_norm_stderr": 0.014242630070574915 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.5098039215686274, "acc_stderr": 0.028624412550167958, "acc_norm": 0.5098039215686274, "acc_norm_stderr": 0.028624412550167958 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.5594855305466238, "acc_stderr": 0.02819640057419743, "acc_norm": 0.5594855305466238, "acc_norm_stderr": 0.02819640057419743 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.5154320987654321, "acc_stderr": 0.0278074900442762, "acc_norm": 0.5154320987654321, "acc_norm_stderr": 0.0278074900442762 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.3723404255319149, "acc_stderr": 0.028838921471251455, "acc_norm": 0.3723404255319149, "acc_norm_stderr": 0.028838921471251455 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.33833116036505867, "acc_stderr": 0.012084265626344199, "acc_norm": 0.33833116036505867, "acc_norm_stderr": 0.012084265626344199 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.5367647058823529, "acc_stderr": 0.030290619180485694, "acc_norm": 0.5367647058823529, "acc_norm_stderr": 0.030290619180485694 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.4297385620915033, "acc_stderr": 0.020027122784928547, "acc_norm": 0.4297385620915033, "acc_norm_stderr": 0.020027122784928547 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.5636363636363636, "acc_stderr": 0.04750185058907296, "acc_norm": 
0.5636363636363636, "acc_norm_stderr": 0.04750185058907296 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.5428571428571428, "acc_stderr": 0.031891418324213966, "acc_norm": 0.5428571428571428, "acc_norm_stderr": 0.031891418324213966 }, "harness|hendrycksTest-sociology|5": { "acc": 0.6567164179104478, "acc_stderr": 0.03357379665433431, "acc_norm": 0.6567164179104478, "acc_norm_stderr": 0.03357379665433431 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.65, "acc_stderr": 0.047937248544110196, "acc_norm": 0.65, "acc_norm_stderr": 0.047937248544110196 }, "harness|hendrycksTest-virology|5": { "acc": 0.39156626506024095, "acc_stderr": 0.03799857454479636, "acc_norm": 0.39156626506024095, "acc_norm_stderr": 0.03799857454479636 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.6549707602339181, "acc_stderr": 0.036459813773888065, "acc_norm": 0.6549707602339181, "acc_norm_stderr": 0.036459813773888065 }, "harness|truthfulqa:mc|0": { "mc1": 0.26193390452876375, "mc1_stderr": 0.015392118805015021, "mc2": 0.39945852317616254, "mc2_stderr": 0.01392826930362614 }, "harness|winogrande|5": { "acc": 0.7134964483030781, "acc_stderr": 0.01270703013996038 }, "harness|gsm8k|5": { "acc": 0.11902956785443518, "acc_stderr": 0.008919702911161632 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. 
--> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. 
--> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
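The per-task entries in the "Latest results" JSON above all follow the same shape: a task key such as `harness|winogrande|5` mapped to an accuracy and its standard error. A minimal sketch of iterating over that structure with only the standard library, using an excerpt of this run's actual values:

```python
import json

# Excerpt of the aggregated results JSON shown above for this run.
results_excerpt = """
{
  "harness|winogrande|5": {"acc": 0.7134964483030781, "acc_stderr": 0.01270703013996038},
  "harness|gsm8k|5": {"acc": 0.11902956785443518, "acc_stderr": 0.008919702911161632}
}
"""

data = json.loads(results_excerpt)
for task, metrics in data.items():
    # Each harness entry carries an accuracy and its standard error.
    print(f"{task}: acc={metrics['acc']:.4f} (stderr {metrics['acc_stderr']:.4f})")
```

The same loop applies to the full results dict (some entries additionally carry `acc_norm`/`acc_norm_stderr` or `mc1`/`mc2` fields, so real code should check which metrics are present per task).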
open-llm-leaderboard/details_Charlie911__vicuna-7b-v1.5-lora-temporal-without-mctaco-1
[ "region:us" ]
2024-01-24T22:41:01+00:00
{"pretty_name": "Evaluation run of Charlie911/vicuna-7b-v1.5-lora-temporal-without-mctaco-1", "dataset_summary": "Dataset automatically created during the evaluation run of model [Charlie911/vicuna-7b-v1.5-lora-temporal-without-mctaco-1](https://huggingface.co/Charlie911/vicuna-7b-v1.5-lora-temporal-without-mctaco-1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Charlie911__vicuna-7b-v1.5-lora-temporal-without-mctaco-1\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-24T22:38:44.232137](https://huggingface.co/datasets/open-llm-leaderboard/details_Charlie911__vicuna-7b-v1.5-lora-temporal-without-mctaco-1/blob/main/results_2024-01-24T22-38-44.232137.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.47390153459197415,\n \"acc_stderr\": 0.034488650414693046,\n \"acc_norm\": 0.47990494504266334,\n \"acc_norm_stderr\": 0.03528061933639502,\n \"mc1\": 0.26193390452876375,\n \"mc1_stderr\": 0.015392118805015021,\n \"mc2\": 0.39945852317616254,\n \"mc2_stderr\": 0.01392826930362614\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.4539249146757679,\n \"acc_stderr\": 0.014549221105171865,\n \"acc_norm\": 0.4948805460750853,\n \"acc_norm_stderr\": 0.01461062489030916\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5613423620792671,\n \"acc_stderr\": 0.004952087083128905,\n \"acc_norm\": 0.7593108942441744,\n \"acc_norm_stderr\": 0.0042662819001443916\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542128,\n \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542128\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.4666666666666667,\n \"acc_stderr\": 0.043097329010363554,\n \"acc_norm\": 0.4666666666666667,\n \"acc_norm_stderr\": 0.043097329010363554\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.4868421052631579,\n \"acc_stderr\": 0.04067533136309173,\n \"acc_norm\": 0.4868421052631579,\n \"acc_norm_stderr\": 0.04067533136309173\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.48,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.48,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.5056603773584906,\n \"acc_stderr\": 0.030770900763851323,\n \"acc_norm\": 0.5056603773584906,\n \"acc_norm_stderr\": 0.030770900763851323\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.4861111111111111,\n \"acc_stderr\": 0.041795966175810016,\n \"acc_norm\": 0.4861111111111111,\n \"acc_norm_stderr\": 0.041795966175810016\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 
0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.42,\n \"acc_stderr\": 0.04960449637488584,\n \"acc_norm\": 0.42,\n \"acc_norm_stderr\": 0.04960449637488584\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.41,\n \"acc_stderr\": 0.04943110704237102,\n \"acc_norm\": 0.41,\n \"acc_norm_stderr\": 0.04943110704237102\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.4682080924855491,\n \"acc_stderr\": 0.03804749744364763,\n \"acc_norm\": 0.4682080924855491,\n \"acc_norm_stderr\": 0.03804749744364763\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.20588235294117646,\n \"acc_stderr\": 0.04023382273617747,\n \"acc_norm\": 0.20588235294117646,\n \"acc_norm_stderr\": 0.04023382273617747\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.59,\n \"acc_stderr\": 0.049431107042371025,\n \"acc_norm\": 0.59,\n \"acc_norm_stderr\": 0.049431107042371025\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.4595744680851064,\n \"acc_stderr\": 0.03257901482099834,\n \"acc_norm\": 0.4595744680851064,\n \"acc_norm_stderr\": 0.03257901482099834\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.2982456140350877,\n \"acc_stderr\": 0.04303684033537315,\n \"acc_norm\": 0.2982456140350877,\n \"acc_norm_stderr\": 0.04303684033537315\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.4413793103448276,\n \"acc_stderr\": 0.04137931034482758,\n \"acc_norm\": 0.4413793103448276,\n \"acc_norm_stderr\": 0.04137931034482758\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.30423280423280424,\n \"acc_stderr\": 0.02369541500946309,\n \"acc_norm\": 0.30423280423280424,\n \"acc_norm_stderr\": 0.02369541500946309\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.2857142857142857,\n \"acc_stderr\": 
0.0404061017820884,\n \"acc_norm\": 0.2857142857142857,\n \"acc_norm_stderr\": 0.0404061017820884\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.5161290322580645,\n \"acc_stderr\": 0.028429203176724555,\n \"acc_norm\": 0.5161290322580645,\n \"acc_norm_stderr\": 0.028429203176724555\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.35467980295566504,\n \"acc_stderr\": 0.033661244890514495,\n \"acc_norm\": 0.35467980295566504,\n \"acc_norm_stderr\": 0.033661244890514495\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.4,\n \"acc_stderr\": 0.049236596391733084,\n \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.049236596391733084\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.6,\n \"acc_stderr\": 0.03825460278380026,\n \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.03825460278380026\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.5757575757575758,\n \"acc_stderr\": 0.03521224908841586,\n \"acc_norm\": 0.5757575757575758,\n \"acc_norm_stderr\": 0.03521224908841586\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.6632124352331606,\n \"acc_stderr\": 0.03410780251836183,\n \"acc_norm\": 0.6632124352331606,\n \"acc_norm_stderr\": 0.03410780251836183\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.4307692307692308,\n \"acc_stderr\": 0.02510682066053975,\n \"acc_norm\": 0.4307692307692308,\n \"acc_norm_stderr\": 0.02510682066053975\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.25925925925925924,\n \"acc_stderr\": 0.026719240783712166,\n \"acc_norm\": 0.25925925925925924,\n \"acc_norm_stderr\": 0.026719240783712166\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n 
\"acc\": 0.47478991596638653,\n \"acc_stderr\": 0.0324371805513741,\n \"acc_norm\": 0.47478991596638653,\n \"acc_norm_stderr\": 0.0324371805513741\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.26490066225165565,\n \"acc_stderr\": 0.03603038545360384,\n \"acc_norm\": 0.26490066225165565,\n \"acc_norm_stderr\": 0.03603038545360384\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.6385321100917432,\n \"acc_stderr\": 0.020598082009937384,\n \"acc_norm\": 0.6385321100917432,\n \"acc_norm_stderr\": 0.020598082009937384\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.35648148148148145,\n \"acc_stderr\": 0.032664783315272714,\n \"acc_norm\": 0.35648148148148145,\n \"acc_norm_stderr\": 0.032664783315272714\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.5833333333333334,\n \"acc_stderr\": 0.03460228327239171,\n \"acc_norm\": 0.5833333333333334,\n \"acc_norm_stderr\": 0.03460228327239171\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.5949367088607594,\n \"acc_stderr\": 0.031955147413706704,\n \"acc_norm\": 0.5949367088607594,\n \"acc_norm_stderr\": 0.031955147413706704\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6143497757847534,\n \"acc_stderr\": 0.03266842214289201,\n \"acc_norm\": 0.6143497757847534,\n \"acc_norm_stderr\": 0.03266842214289201\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.6106870229007634,\n \"acc_stderr\": 0.04276486542814591,\n \"acc_norm\": 0.6106870229007634,\n \"acc_norm_stderr\": 0.04276486542814591\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.5619834710743802,\n \"acc_stderr\": 0.04529146804435792,\n \"acc_norm\": 0.5619834710743802,\n \"acc_norm_stderr\": 0.04529146804435792\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.5277777777777778,\n \"acc_stderr\": 0.048262172941398944,\n \"acc_norm\": 0.5277777777777778,\n \"acc_norm_stderr\": 
0.048262172941398944\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.44785276073619634,\n \"acc_stderr\": 0.03906947479456601,\n \"acc_norm\": 0.44785276073619634,\n \"acc_norm_stderr\": 0.03906947479456601\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.39285714285714285,\n \"acc_stderr\": 0.046355501356099754,\n \"acc_norm\": 0.39285714285714285,\n \"acc_norm_stderr\": 0.046355501356099754\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.5922330097087378,\n \"acc_stderr\": 0.0486577757041077,\n \"acc_norm\": 0.5922330097087378,\n \"acc_norm_stderr\": 0.0486577757041077\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.7136752136752137,\n \"acc_stderr\": 0.02961432369045665,\n \"acc_norm\": 0.7136752136752137,\n \"acc_norm_stderr\": 0.02961432369045665\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.54,\n \"acc_stderr\": 0.05009082659620332,\n \"acc_norm\": 0.54,\n \"acc_norm_stderr\": 0.05009082659620332\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.6602809706257982,\n \"acc_stderr\": 0.01693639411430162,\n \"acc_norm\": 0.6602809706257982,\n \"acc_norm_stderr\": 0.01693639411430162\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.5375722543352601,\n \"acc_stderr\": 0.026842985519615375,\n \"acc_norm\": 0.5375722543352601,\n \"acc_norm_stderr\": 0.026842985519615375\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.23798882681564246,\n \"acc_stderr\": 0.014242630070574915,\n \"acc_norm\": 0.23798882681564246,\n \"acc_norm_stderr\": 0.014242630070574915\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.5098039215686274,\n \"acc_stderr\": 0.028624412550167958,\n \"acc_norm\": 0.5098039215686274,\n \"acc_norm_stderr\": 0.028624412550167958\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.5594855305466238,\n \"acc_stderr\": 0.02819640057419743,\n \"acc_norm\": 0.5594855305466238,\n \"acc_norm_stderr\": 
0.02819640057419743\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.5154320987654321,\n \"acc_stderr\": 0.0278074900442762,\n \"acc_norm\": 0.5154320987654321,\n \"acc_norm_stderr\": 0.0278074900442762\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.3723404255319149,\n \"acc_stderr\": 0.028838921471251455,\n \"acc_norm\": 0.3723404255319149,\n \"acc_norm_stderr\": 0.028838921471251455\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.33833116036505867,\n \"acc_stderr\": 0.012084265626344199,\n \"acc_norm\": 0.33833116036505867,\n \"acc_norm_stderr\": 0.012084265626344199\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.5367647058823529,\n \"acc_stderr\": 0.030290619180485694,\n \"acc_norm\": 0.5367647058823529,\n \"acc_norm_stderr\": 0.030290619180485694\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.4297385620915033,\n \"acc_stderr\": 0.020027122784928547,\n \"acc_norm\": 0.4297385620915033,\n \"acc_norm_stderr\": 0.020027122784928547\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.5636363636363636,\n \"acc_stderr\": 0.04750185058907296,\n \"acc_norm\": 0.5636363636363636,\n \"acc_norm_stderr\": 0.04750185058907296\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.5428571428571428,\n \"acc_stderr\": 0.031891418324213966,\n \"acc_norm\": 0.5428571428571428,\n \"acc_norm_stderr\": 0.031891418324213966\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.6567164179104478,\n \"acc_stderr\": 0.03357379665433431,\n \"acc_norm\": 0.6567164179104478,\n \"acc_norm_stderr\": 0.03357379665433431\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.65,\n \"acc_stderr\": 0.047937248544110196,\n \"acc_norm\": 0.65,\n \"acc_norm_stderr\": 0.047937248544110196\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.39156626506024095,\n \"acc_stderr\": 0.03799857454479636,\n \"acc_norm\": 
0.39156626506024095,\n \"acc_norm_stderr\": 0.03799857454479636\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.6549707602339181,\n \"acc_stderr\": 0.036459813773888065,\n \"acc_norm\": 0.6549707602339181,\n \"acc_norm_stderr\": 0.036459813773888065\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.26193390452876375,\n \"mc1_stderr\": 0.015392118805015021,\n \"mc2\": 0.39945852317616254,\n \"mc2_stderr\": 0.01392826930362614\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7134964483030781,\n \"acc_stderr\": 0.01270703013996038\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.11902956785443518,\n \"acc_stderr\": 0.008919702911161632\n }\n}\n```", "repo_url": "https://huggingface.co/Charlie911/vicuna-7b-v1.5-lora-temporal-without-mctaco-1", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|arc:challenge|25_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|gsm8k|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|hellaswag|10_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T22-38-44.232137.parquet", 
"**/details_harness|hendrycksTest-anatomy|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T22-38-44.232137.parquet", 
"**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T22-38-44.232137.parquet", 
"**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T22-38-44.232137.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T22-38-44.232137.parquet", 
"**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T22-38-44.232137.parquet", 
"**/details_harness|hendrycksTest-public_relations|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-24T22-38-44.232137.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": 
["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": 
["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", 
"path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": 
"2024_01_24T22_38_44.232137", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": 
"2024_01_24T22_38_44.232137", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-24T22-38-44.232137.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["**/details_harness|winogrande|5_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-24T22-38-44.232137.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_01_24T22_38_44.232137", "path": ["results_2024-01-24T22-38-44.232137.parquet"]}, {"split": "latest", "path": ["results_2024-01-24T22-38-44.232137.parquet"]}]}]}
2024-01-24T22:41:25+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of Charlie911/vicuna-7b-v1.5-lora-temporal-without-mctaco-1 Dataset automatically created during the evaluation run of model Charlie911/vicuna-7b-v1.5-lora-temporal-without-mctaco-1 on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2024-01-24T22:38:44.232137 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Evaluation run of Charlie911/vicuna-7b-v1.5-lora-temporal-without-mctaco-1\n\n\n\nDataset automatically created during the evaluation run of model Charlie911/vicuna-7b-v1.5-lora-temporal-without-mctaco-1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-01-24T22:38:44.232137(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of Charlie911/vicuna-7b-v1.5-lora-temporal-without-mctaco-1\n\n\n\nDataset automatically created during the evaluation run of model Charlie911/vicuna-7b-v1.5-lora-temporal-without-mctaco-1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-01-24T22:38:44.232137(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
3fe513cf11772eadaaae63c837d8c2d444b44c0f
# Dataset Card for Evaluation run of Charlie911/vicuna-7b-v1.5-lora-temporal-sharegpt <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [Charlie911/vicuna-7b-v1.5-lora-temporal-sharegpt](https://huggingface.co/Charlie911/vicuna-7b-v1.5-lora-temporal-sharegpt) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_Charlie911__vicuna-7b-v1.5-lora-temporal-sharegpt", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2024-01-24T22:39:12.768108](https://huggingface.co/datasets/open-llm-leaderboard/details_Charlie911__vicuna-7b-v1.5-lora-temporal-sharegpt/blob/main/results_2024-01-24T22-39-12.768108.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.506306432089862, "acc_stderr": 0.03425685639219852, "acc_norm": 0.512609443095517, "acc_norm_stderr": 0.03502265778395153, "mc1": 0.2913096695226438, "mc1_stderr": 0.015905987048184828, "mc2": 0.4475004397320104, "mc2_stderr": 0.014712894493806422 }, "harness|arc:challenge|25": { "acc": 0.4872013651877133, "acc_stderr": 0.014606603181012538, "acc_norm": 0.5349829351535836, "acc_norm_stderr": 0.014575583922019669 }, "harness|hellaswag|10": { "acc": 0.5598486357299343, "acc_stderr": 0.004953907062096598, "acc_norm": 0.7582154949213304, "acc_norm_stderr": 0.004272893583742265 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.27, "acc_stderr": 0.044619604333847415, "acc_norm": 0.27, "acc_norm_stderr": 0.044619604333847415 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.4888888888888889, "acc_stderr": 0.04318275491977976, "acc_norm": 0.4888888888888889, "acc_norm_stderr": 0.04318275491977976 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.4934210526315789, "acc_stderr": 0.04068590050224971, "acc_norm": 0.4934210526315789, "acc_norm_stderr": 0.04068590050224971 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.52, "acc_stderr": 0.050211673156867795, "acc_norm": 0.52, "acc_norm_stderr": 0.050211673156867795 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.5358490566037736, "acc_stderr": 0.030693675018458003, "acc_norm": 0.5358490566037736, "acc_norm_stderr": 0.030693675018458003 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.5208333333333334, "acc_stderr": 0.041775789507399935, "acc_norm": 0.5208333333333334, "acc_norm_stderr": 0.041775789507399935 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.35, "acc_stderr": 0.04793724854411019, "acc_norm": 0.35, "acc_norm_stderr": 0.04793724854411019 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.46, "acc_stderr": 0.05009082659620332, "acc_norm": 0.46, 
"acc_norm_stderr": 0.05009082659620332 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.38, "acc_stderr": 0.048783173121456316, "acc_norm": 0.38, "acc_norm_stderr": 0.048783173121456316 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.49710982658959535, "acc_stderr": 0.03812400565974833, "acc_norm": 0.49710982658959535, "acc_norm_stderr": 0.03812400565974833 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.18627450980392157, "acc_stderr": 0.03873958714149352, "acc_norm": 0.18627450980392157, "acc_norm_stderr": 0.03873958714149352 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.65, "acc_stderr": 0.0479372485441102, "acc_norm": 0.65, "acc_norm_stderr": 0.0479372485441102 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.4765957446808511, "acc_stderr": 0.032650194750335815, "acc_norm": 0.4765957446808511, "acc_norm_stderr": 0.032650194750335815 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.30701754385964913, "acc_stderr": 0.043391383225798615, "acc_norm": 0.30701754385964913, "acc_norm_stderr": 0.043391383225798615 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.4827586206896552, "acc_stderr": 0.04164188720169377, "acc_norm": 0.4827586206896552, "acc_norm_stderr": 0.04164188720169377 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.30423280423280424, "acc_stderr": 0.02369541500946309, "acc_norm": 0.30423280423280424, "acc_norm_stderr": 0.02369541500946309 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.29365079365079366, "acc_stderr": 0.04073524322147125, "acc_norm": 0.29365079365079366, "acc_norm_stderr": 0.04073524322147125 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.31, "acc_stderr": 0.04648231987117316, "acc_norm": 0.31, "acc_norm_stderr": 0.04648231987117316 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.5612903225806452, "acc_stderr": 0.028229497320317223, "acc_norm": 0.5612903225806452, "acc_norm_stderr": 0.028229497320317223 }, 
"harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.41379310344827586, "acc_stderr": 0.03465304488406795, "acc_norm": 0.41379310344827586, "acc_norm_stderr": 0.03465304488406795 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.5, "acc_stderr": 0.050251890762960605, "acc_norm": 0.5, "acc_norm_stderr": 0.050251890762960605 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.6181818181818182, "acc_stderr": 0.03793713171165634, "acc_norm": 0.6181818181818182, "acc_norm_stderr": 0.03793713171165634 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.6060606060606061, "acc_stderr": 0.03481285338232963, "acc_norm": 0.6060606060606061, "acc_norm_stderr": 0.03481285338232963 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.7046632124352331, "acc_stderr": 0.03292296639155141, "acc_norm": 0.7046632124352331, "acc_norm_stderr": 0.03292296639155141 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.45897435897435895, "acc_stderr": 0.025265525491284295, "acc_norm": 0.45897435897435895, "acc_norm_stderr": 0.025265525491284295 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.23333333333333334, "acc_stderr": 0.025787874220959323, "acc_norm": 0.23333333333333334, "acc_norm_stderr": 0.025787874220959323 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.47478991596638653, "acc_stderr": 0.0324371805513741, "acc_norm": 0.47478991596638653, "acc_norm_stderr": 0.0324371805513741 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.304635761589404, "acc_stderr": 0.037579499229433426, "acc_norm": 0.304635761589404, "acc_norm_stderr": 0.037579499229433426 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.691743119266055, "acc_stderr": 0.019798366698367254, "acc_norm": 0.691743119266055, "acc_norm_stderr": 0.019798366698367254 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.39351851851851855, "acc_stderr": 
0.03331747876370312, "acc_norm": 0.39351851851851855, "acc_norm_stderr": 0.03331747876370312 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.7205882352941176, "acc_stderr": 0.03149328104507957, "acc_norm": 0.7205882352941176, "acc_norm_stderr": 0.03149328104507957 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.6751054852320675, "acc_stderr": 0.030486039389105296, "acc_norm": 0.6751054852320675, "acc_norm_stderr": 0.030486039389105296 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.6233183856502242, "acc_stderr": 0.032521134899291884, "acc_norm": 0.6233183856502242, "acc_norm_stderr": 0.032521134899291884 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.6183206106870229, "acc_stderr": 0.0426073515764456, "acc_norm": 0.6183206106870229, "acc_norm_stderr": 0.0426073515764456 }, "harness|hendrycksTest-international_law|5": { "acc": 0.6033057851239669, "acc_stderr": 0.044658697805310094, "acc_norm": 0.6033057851239669, "acc_norm_stderr": 0.044658697805310094 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.5555555555555556, "acc_stderr": 0.04803752235190193, "acc_norm": 0.5555555555555556, "acc_norm_stderr": 0.04803752235190193 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.5214723926380368, "acc_stderr": 0.03924746876751129, "acc_norm": 0.5214723926380368, "acc_norm_stderr": 0.03924746876751129 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.4107142857142857, "acc_stderr": 0.04669510663875191, "acc_norm": 0.4107142857142857, "acc_norm_stderr": 0.04669510663875191 }, "harness|hendrycksTest-management|5": { "acc": 0.6699029126213593, "acc_stderr": 0.0465614711001235, "acc_norm": 0.6699029126213593, "acc_norm_stderr": 0.0465614711001235 }, "harness|hendrycksTest-marketing|5": { "acc": 0.782051282051282, "acc_stderr": 0.027046857630716677, "acc_norm": 0.782051282051282, "acc_norm_stderr": 0.027046857630716677 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.57, "acc_stderr": 
0.04975698519562429, "acc_norm": 0.57, "acc_norm_stderr": 0.04975698519562429 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.6998722860791826, "acc_stderr": 0.016389249691317425, "acc_norm": 0.6998722860791826, "acc_norm_stderr": 0.016389249691317425 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.5751445086705202, "acc_stderr": 0.026613350840261746, "acc_norm": 0.5751445086705202, "acc_norm_stderr": 0.026613350840261746 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.26033519553072626, "acc_stderr": 0.014676252009319476, "acc_norm": 0.26033519553072626, "acc_norm_stderr": 0.014676252009319476 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.5588235294117647, "acc_stderr": 0.028431095444176643, "acc_norm": 0.5588235294117647, "acc_norm_stderr": 0.028431095444176643 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.6012861736334405, "acc_stderr": 0.027809322585774496, "acc_norm": 0.6012861736334405, "acc_norm_stderr": 0.027809322585774496 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.5432098765432098, "acc_stderr": 0.02771666165019404, "acc_norm": 0.5432098765432098, "acc_norm_stderr": 0.02771666165019404 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.3829787234042553, "acc_stderr": 0.02899908090480618, "acc_norm": 0.3829787234042553, "acc_norm_stderr": 0.02899908090480618 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.3617992177314211, "acc_stderr": 0.012272736233262938, "acc_norm": 0.3617992177314211, "acc_norm_stderr": 0.012272736233262938 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.5551470588235294, "acc_stderr": 0.030187532060329387, "acc_norm": 0.5551470588235294, "acc_norm_stderr": 0.030187532060329387 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.47058823529411764, "acc_stderr": 0.02019280827143379, "acc_norm": 0.47058823529411764, "acc_norm_stderr": 0.02019280827143379 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6272727272727273, "acc_stderr": 
0.04631381319425465, "acc_norm": 0.6272727272727273, "acc_norm_stderr": 0.04631381319425465 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.5836734693877551, "acc_stderr": 0.031557828165561644, "acc_norm": 0.5836734693877551, "acc_norm_stderr": 0.031557828165561644 }, "harness|hendrycksTest-sociology|5": { "acc": 0.6616915422885572, "acc_stderr": 0.03345563070339193, "acc_norm": 0.6616915422885572, "acc_norm_stderr": 0.03345563070339193 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.69, "acc_stderr": 0.04648231987117316, "acc_norm": 0.69, "acc_norm_stderr": 0.04648231987117316 }, "harness|hendrycksTest-virology|5": { "acc": 0.42168674698795183, "acc_stderr": 0.03844453181770917, "acc_norm": 0.42168674698795183, "acc_norm_stderr": 0.03844453181770917 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.7192982456140351, "acc_stderr": 0.034462962170884265, "acc_norm": 0.7192982456140351, "acc_norm_stderr": 0.034462962170884265 }, "harness|truthfulqa:mc|0": { "mc1": 0.2913096695226438, "mc1_stderr": 0.015905987048184828, "mc2": 0.4475004397320104, "mc2_stderr": 0.014712894493806422 }, "harness|winogrande|5": { "acc": 0.7269139700078927, "acc_stderr": 0.012522020105869457 }, "harness|gsm8k|5": { "acc": 0.1599696739954511, "acc_stderr": 0.010097377827752538 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. 
--> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. 
--> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
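In the configuration listings above, each per-run split name appears to be the run timestamp with its `-` and `:` characters replaced by `_` (for example, run `2024-01-24T22:38:44.232137` maps to split `2024_01_24T22_38_44.232137`). A minimal sketch of that observed mapping — the helper name is illustrative and not part of the leaderboard tooling:

```python
def run_timestamp_to_split_name(timestamp: str) -> str:
    """Derive a per-run split name from a run timestamp.

    The configurations in this dataset name their splits by replacing
    '-' and ':' in the run timestamp with '_' (an observed naming
    pattern, not a documented API).
    """
    return timestamp.replace("-", "_").replace(":", "_")


# Example: the run recorded in this card.
print(run_timestamp_to_split_name("2024-01-24T22:39:12.768108"))
# → 2024_01_24T22_39_12.768108
```

This is handy when you want to address a specific historical run instead of the "latest" split, since `load_dataset(..., split=...)` expects the split name in the underscore form shown in the configs.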
open-llm-leaderboard/details_Charlie911__vicuna-7b-v1.5-lora-temporal-sharegpt
[ "region:us" ]
2024-01-24T22:41:30+00:00
{"pretty_name": "Evaluation run of Charlie911/vicuna-7b-v1.5-lora-temporal-sharegpt", "dataset_summary": "Dataset automatically created during the evaluation run of model [Charlie911/vicuna-7b-v1.5-lora-temporal-sharegpt](https://huggingface.co/Charlie911/vicuna-7b-v1.5-lora-temporal-sharegpt) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Charlie911__vicuna-7b-v1.5-lora-temporal-sharegpt\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-24T22:39:12.768108](https://huggingface.co/datasets/open-llm-leaderboard/details_Charlie911__vicuna-7b-v1.5-lora-temporal-sharegpt/blob/main/results_2024-01-24T22-39-12.768108.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.506306432089862,\n \"acc_stderr\": 0.03425685639219852,\n \"acc_norm\": 0.512609443095517,\n \"acc_norm_stderr\": 0.03502265778395153,\n \"mc1\": 0.2913096695226438,\n \"mc1_stderr\": 0.015905987048184828,\n \"mc2\": 0.4475004397320104,\n \"mc2_stderr\": 0.014712894493806422\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.4872013651877133,\n \"acc_stderr\": 0.014606603181012538,\n \"acc_norm\": 0.5349829351535836,\n \"acc_norm_stderr\": 0.014575583922019669\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5598486357299343,\n \"acc_stderr\": 0.004953907062096598,\n \"acc_norm\": 0.7582154949213304,\n \"acc_norm_stderr\": 0.004272893583742265\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.27,\n \"acc_stderr\": 0.044619604333847415,\n \"acc_norm\": 0.27,\n \"acc_norm_stderr\": 0.044619604333847415\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.4888888888888889,\n \"acc_stderr\": 0.04318275491977976,\n \"acc_norm\": 0.4888888888888889,\n \"acc_norm_stderr\": 0.04318275491977976\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.4934210526315789,\n \"acc_stderr\": 0.04068590050224971,\n \"acc_norm\": 0.4934210526315789,\n \"acc_norm_stderr\": 0.04068590050224971\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.52,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.52,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.5358490566037736,\n \"acc_stderr\": 0.030693675018458003,\n \"acc_norm\": 0.5358490566037736,\n \"acc_norm_stderr\": 0.030693675018458003\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.5208333333333334,\n \"acc_stderr\": 0.041775789507399935,\n \"acc_norm\": 0.5208333333333334,\n \"acc_norm_stderr\": 0.041775789507399935\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.35,\n 
\"acc_stderr\": 0.04793724854411019,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.04793724854411019\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.46,\n \"acc_stderr\": 0.05009082659620332,\n \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620332\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.38,\n \"acc_stderr\": 0.048783173121456316,\n \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.048783173121456316\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.49710982658959535,\n \"acc_stderr\": 0.03812400565974833,\n \"acc_norm\": 0.49710982658959535,\n \"acc_norm_stderr\": 0.03812400565974833\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.18627450980392157,\n \"acc_stderr\": 0.03873958714149352,\n \"acc_norm\": 0.18627450980392157,\n \"acc_norm_stderr\": 0.03873958714149352\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.65,\n \"acc_stderr\": 0.0479372485441102,\n \"acc_norm\": 0.65,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.4765957446808511,\n \"acc_stderr\": 0.032650194750335815,\n \"acc_norm\": 0.4765957446808511,\n \"acc_norm_stderr\": 0.032650194750335815\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.30701754385964913,\n \"acc_stderr\": 0.043391383225798615,\n \"acc_norm\": 0.30701754385964913,\n \"acc_norm_stderr\": 0.043391383225798615\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.4827586206896552,\n \"acc_stderr\": 0.04164188720169377,\n \"acc_norm\": 0.4827586206896552,\n \"acc_norm_stderr\": 0.04164188720169377\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.30423280423280424,\n \"acc_stderr\": 0.02369541500946309,\n \"acc_norm\": 0.30423280423280424,\n \"acc_norm_stderr\": 0.02369541500946309\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.29365079365079366,\n \"acc_stderr\": 
0.04073524322147125,\n \"acc_norm\": 0.29365079365079366,\n \"acc_norm_stderr\": 0.04073524322147125\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.5612903225806452,\n \"acc_stderr\": 0.028229497320317223,\n \"acc_norm\": 0.5612903225806452,\n \"acc_norm_stderr\": 0.028229497320317223\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.41379310344827586,\n \"acc_stderr\": 0.03465304488406795,\n \"acc_norm\": 0.41379310344827586,\n \"acc_norm_stderr\": 0.03465304488406795\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.050251890762960605,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.050251890762960605\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.6181818181818182,\n \"acc_stderr\": 0.03793713171165634,\n \"acc_norm\": 0.6181818181818182,\n \"acc_norm_stderr\": 0.03793713171165634\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.6060606060606061,\n \"acc_stderr\": 0.03481285338232963,\n \"acc_norm\": 0.6060606060606061,\n \"acc_norm_stderr\": 0.03481285338232963\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.7046632124352331,\n \"acc_stderr\": 0.03292296639155141,\n \"acc_norm\": 0.7046632124352331,\n \"acc_norm_stderr\": 0.03292296639155141\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.45897435897435895,\n \"acc_stderr\": 0.025265525491284295,\n \"acc_norm\": 0.45897435897435895,\n \"acc_norm_stderr\": 0.025265525491284295\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.23333333333333334,\n \"acc_stderr\": 0.025787874220959323,\n \"acc_norm\": 0.23333333333333334,\n \"acc_norm_stderr\": 0.025787874220959323\n },\n 
\"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.47478991596638653,\n \"acc_stderr\": 0.0324371805513741,\n \"acc_norm\": 0.47478991596638653,\n \"acc_norm_stderr\": 0.0324371805513741\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.304635761589404,\n \"acc_stderr\": 0.037579499229433426,\n \"acc_norm\": 0.304635761589404,\n \"acc_norm_stderr\": 0.037579499229433426\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.691743119266055,\n \"acc_stderr\": 0.019798366698367254,\n \"acc_norm\": 0.691743119266055,\n \"acc_norm_stderr\": 0.019798366698367254\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.39351851851851855,\n \"acc_stderr\": 0.03331747876370312,\n \"acc_norm\": 0.39351851851851855,\n \"acc_norm_stderr\": 0.03331747876370312\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7205882352941176,\n \"acc_stderr\": 0.03149328104507957,\n \"acc_norm\": 0.7205882352941176,\n \"acc_norm_stderr\": 0.03149328104507957\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.6751054852320675,\n \"acc_stderr\": 0.030486039389105296,\n \"acc_norm\": 0.6751054852320675,\n \"acc_norm_stderr\": 0.030486039389105296\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6233183856502242,\n \"acc_stderr\": 0.032521134899291884,\n \"acc_norm\": 0.6233183856502242,\n \"acc_norm_stderr\": 0.032521134899291884\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.6183206106870229,\n \"acc_stderr\": 0.0426073515764456,\n \"acc_norm\": 0.6183206106870229,\n \"acc_norm_stderr\": 0.0426073515764456\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.6033057851239669,\n \"acc_stderr\": 0.044658697805310094,\n \"acc_norm\": 0.6033057851239669,\n \"acc_norm_stderr\": 0.044658697805310094\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.5555555555555556,\n \"acc_stderr\": 0.04803752235190193,\n 
\"acc_norm\": 0.5555555555555556,\n \"acc_norm_stderr\": 0.04803752235190193\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.5214723926380368,\n \"acc_stderr\": 0.03924746876751129,\n \"acc_norm\": 0.5214723926380368,\n \"acc_norm_stderr\": 0.03924746876751129\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4107142857142857,\n \"acc_stderr\": 0.04669510663875191,\n \"acc_norm\": 0.4107142857142857,\n \"acc_norm_stderr\": 0.04669510663875191\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.6699029126213593,\n \"acc_stderr\": 0.0465614711001235,\n \"acc_norm\": 0.6699029126213593,\n \"acc_norm_stderr\": 0.0465614711001235\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.782051282051282,\n \"acc_stderr\": 0.027046857630716677,\n \"acc_norm\": 0.782051282051282,\n \"acc_norm_stderr\": 0.027046857630716677\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.57,\n \"acc_stderr\": 0.04975698519562429,\n \"acc_norm\": 0.57,\n \"acc_norm_stderr\": 0.04975698519562429\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.6998722860791826,\n \"acc_stderr\": 0.016389249691317425,\n \"acc_norm\": 0.6998722860791826,\n \"acc_norm_stderr\": 0.016389249691317425\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.5751445086705202,\n \"acc_stderr\": 0.026613350840261746,\n \"acc_norm\": 0.5751445086705202,\n \"acc_norm_stderr\": 0.026613350840261746\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.26033519553072626,\n \"acc_stderr\": 0.014676252009319476,\n \"acc_norm\": 0.26033519553072626,\n \"acc_norm_stderr\": 0.014676252009319476\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.5588235294117647,\n \"acc_stderr\": 0.028431095444176643,\n \"acc_norm\": 0.5588235294117647,\n \"acc_norm_stderr\": 0.028431095444176643\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6012861736334405,\n \"acc_stderr\": 0.027809322585774496,\n 
\"acc_norm\": 0.6012861736334405,\n \"acc_norm_stderr\": 0.027809322585774496\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.5432098765432098,\n \"acc_stderr\": 0.02771666165019404,\n \"acc_norm\": 0.5432098765432098,\n \"acc_norm_stderr\": 0.02771666165019404\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.3829787234042553,\n \"acc_stderr\": 0.02899908090480618,\n \"acc_norm\": 0.3829787234042553,\n \"acc_norm_stderr\": 0.02899908090480618\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.3617992177314211,\n \"acc_stderr\": 0.012272736233262938,\n \"acc_norm\": 0.3617992177314211,\n \"acc_norm_stderr\": 0.012272736233262938\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.5551470588235294,\n \"acc_stderr\": 0.030187532060329387,\n \"acc_norm\": 0.5551470588235294,\n \"acc_norm_stderr\": 0.030187532060329387\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.47058823529411764,\n \"acc_stderr\": 0.02019280827143379,\n \"acc_norm\": 0.47058823529411764,\n \"acc_norm_stderr\": 0.02019280827143379\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6272727272727273,\n \"acc_stderr\": 0.04631381319425465,\n \"acc_norm\": 0.6272727272727273,\n \"acc_norm_stderr\": 0.04631381319425465\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.5836734693877551,\n \"acc_stderr\": 0.031557828165561644,\n \"acc_norm\": 0.5836734693877551,\n \"acc_norm_stderr\": 0.031557828165561644\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.6616915422885572,\n \"acc_stderr\": 0.03345563070339193,\n \"acc_norm\": 0.6616915422885572,\n \"acc_norm_stderr\": 0.03345563070339193\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.42168674698795183,\n \"acc_stderr\": 
0.03844453181770917,\n \"acc_norm\": 0.42168674698795183,\n \"acc_norm_stderr\": 0.03844453181770917\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.7192982456140351,\n \"acc_stderr\": 0.034462962170884265,\n \"acc_norm\": 0.7192982456140351,\n \"acc_norm_stderr\": 0.034462962170884265\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2913096695226438,\n \"mc1_stderr\": 0.015905987048184828,\n \"mc2\": 0.4475004397320104,\n \"mc2_stderr\": 0.014712894493806422\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7269139700078927,\n \"acc_stderr\": 0.012522020105869457\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.1599696739954511,\n \"acc_stderr\": 0.010097377827752538\n }\n}\n```", "repo_url": "https://huggingface.co/Charlie911/vicuna-7b-v1.5-lora-temporal-sharegpt", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|arc:challenge|25_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|gsm8k|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|hellaswag|10_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T22-39-12.768108.parquet", 
"**/details_harness|hendrycksTest-anatomy|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T22-39-12.768108.parquet", 
"**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T22-39-12.768108.parquet", 
"**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T22-39-12.768108.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T22-39-12.768108.parquet", 
"**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T22-39-12.768108.parquet", 
"**/details_harness|hendrycksTest-public_relations|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-24T22-39-12.768108.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": 
["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": 
["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", 
"path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": 
"2024_01_24T22_39_12.768108", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": 
"2024_01_24T22_39_12.768108", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-24T22-39-12.768108.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["**/details_harness|winogrande|5_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-24T22-39-12.768108.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_01_24T22_39_12.768108", "path": ["results_2024-01-24T22-39-12.768108.parquet"]}, {"split": "latest", "path": ["results_2024-01-24T22-39-12.768108.parquet"]}]}]}
2024-01-24T22:41:54+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of Charlie911/vicuna-7b-v1.5-lora-temporal-sharegpt Dataset automatically created during the evaluation run of model Charlie911/vicuna-7b-v1.5-lora-temporal-sharegpt on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2024-01-24T22:39:12.768108 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Evaluation run of Charlie911/vicuna-7b-v1.5-lora-temporal-sharegpt\n\n\n\nDataset automatically created during the evaluation run of model Charlie911/vicuna-7b-v1.5-lora-temporal-sharegpt on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-01-24T22:39:12.768108 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of Charlie911/vicuna-7b-v1.5-lora-temporal-sharegpt\n\n\n\nDataset automatically created during the evaluation run of model Charlie911/vicuna-7b-v1.5-lora-temporal-sharegpt on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-01-24T22:39:12.768108 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
37726b47c9a110f817d50fbb6282146fd56ff555
# Dataset Card for Dataset Name <!-- Provide a quick summary of the dataset. --> This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. 
--> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. 
--> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
Tamazight-NLP/tamawalt-n-imZZyann
[ "task_categories:automatic-speech-recognition", "task_categories:text-to-speech", "task_categories:image-classification", "size_categories:n<1K", "language:zgh", "language:en", "language:fr", "language:ar", "region:us" ]
2024-01-24T23:09:31+00:00
{"language": ["zgh", "en", "fr", "ar"], "size_categories": ["n<1K"], "task_categories": ["automatic-speech-recognition", "text-to-speech", "image-classification"], "pretty_name": "Tamawalt N ImZZyann"}
2024-01-24T23:53:22+00:00
[]
[ "zgh", "en", "fr", "ar" ]
TAGS #task_categories-automatic-speech-recognition #task_categories-text-to-speech #task_categories-image-classification #size_categories-n<1K #language-Standard Moroccan Tamazight #language-English #language-French #language-Arabic #region-us
# Dataset Card for Dataset Name This dataset card aims to be a base template for new datasets. It has been generated using this raw template. ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#task_categories-automatic-speech-recognition #task_categories-text-to-speech #task_categories-image-classification #size_categories-n<1K #language-Standard Moroccan Tamazight #language-English #language-French #language-Arabic #region-us \n", "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
228f0bb727f0b01f10aa5962b64ac65b707006f0
# Dataset Card for Evaluation run of Sharathhebbar24/chat_gpt2_dpo <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [Sharathhebbar24/chat_gpt2_dpo](https://huggingface.co/Sharathhebbar24/chat_gpt2_dpo) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_Sharathhebbar24__chat_gpt2_dpo", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2024-01-24T23:16:20.806902](https://huggingface.co/datasets/open-llm-leaderboard/details_Sharathhebbar24__chat_gpt2_dpo/blob/main/results_2024-01-24T23-16-20.806902.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.24915779048270345, "acc_stderr": 0.030509906389610868, "acc_norm": 0.25041231816215265, "acc_norm_stderr": 0.03132600249114931, "mc1": 0.2521419828641371, "mc1_stderr": 0.015201522246299965, "mc2": 0.41257163824244014, "mc2_stderr": 0.015127188811834062 }, "harness|arc:challenge|25": { "acc": 0.18686006825938567, "acc_stderr": 0.011391015649694391, "acc_norm": 0.23976109215017063, "acc_norm_stderr": 0.012476304127453954 }, "harness|hellaswag|10": { "acc": 0.28978291177056364, "acc_stderr": 0.004527343651130803, "acc_norm": 0.3121888070105557, "acc_norm_stderr": 0.0046243936909668975 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.22, "acc_stderr": 0.04163331998932268, "acc_norm": 0.22, "acc_norm_stderr": 0.04163331998932268 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.3037037037037037, "acc_stderr": 0.039725528847851375, "acc_norm": 0.3037037037037037, "acc_norm_stderr": 0.039725528847851375 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.17763157894736842, "acc_stderr": 0.031103182383123398, "acc_norm": 0.17763157894736842, "acc_norm_stderr": 0.031103182383123398 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.26, "acc_stderr": 0.0440844002276808, "acc_norm": 0.26, "acc_norm_stderr": 0.0440844002276808 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.23018867924528302, "acc_stderr": 0.025907897122408173, "acc_norm": 0.23018867924528302, "acc_norm_stderr": 0.025907897122408173 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.2569444444444444, "acc_stderr": 0.03653946969442099, "acc_norm": 0.2569444444444444, "acc_norm_stderr": 0.03653946969442099 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.19, "acc_stderr": 0.039427724440366234, "acc_norm": 0.19, "acc_norm_stderr": 0.039427724440366234 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.24, "acc_stderr": 0.04292346959909283, "acc_norm": 0.24, 
"acc_norm_stderr": 0.04292346959909283 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.29, "acc_stderr": 0.04560480215720684, "acc_norm": 0.29, "acc_norm_stderr": 0.04560480215720684 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.2543352601156069, "acc_stderr": 0.0332055644308557, "acc_norm": 0.2543352601156069, "acc_norm_stderr": 0.0332055644308557 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.21568627450980393, "acc_stderr": 0.04092563958237654, "acc_norm": 0.21568627450980393, "acc_norm_stderr": 0.04092563958237654 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.34, "acc_stderr": 0.04760952285695236, "acc_norm": 0.34, "acc_norm_stderr": 0.04760952285695236 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.26382978723404255, "acc_stderr": 0.028809989854102973, "acc_norm": 0.26382978723404255, "acc_norm_stderr": 0.028809989854102973 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.24561403508771928, "acc_stderr": 0.04049339297748142, "acc_norm": 0.24561403508771928, "acc_norm_stderr": 0.04049339297748142 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.2413793103448276, "acc_stderr": 0.03565998174135302, "acc_norm": 0.2413793103448276, "acc_norm_stderr": 0.03565998174135302 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.24074074074074073, "acc_stderr": 0.02201908001221789, "acc_norm": 0.24074074074074073, "acc_norm_stderr": 0.02201908001221789 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.1349206349206349, "acc_stderr": 0.030557101589417515, "acc_norm": 0.1349206349206349, "acc_norm_stderr": 0.030557101589417515 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.16, "acc_stderr": 0.03684529491774708, "acc_norm": 0.16, "acc_norm_stderr": 0.03684529491774708 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.1774193548387097, "acc_stderr": 0.02173254068932927, "acc_norm": 0.1774193548387097, "acc_norm_stderr": 0.02173254068932927 }, 
"harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.24630541871921183, "acc_stderr": 0.030315099285617736, "acc_norm": 0.24630541871921183, "acc_norm_stderr": 0.030315099285617736 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.28, "acc_stderr": 0.04512608598542126, "acc_norm": 0.28, "acc_norm_stderr": 0.04512608598542126 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.21818181818181817, "acc_stderr": 0.03225078108306289, "acc_norm": 0.21818181818181817, "acc_norm_stderr": 0.03225078108306289 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.3282828282828283, "acc_stderr": 0.03345678422756776, "acc_norm": 0.3282828282828283, "acc_norm_stderr": 0.03345678422756776 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.37305699481865284, "acc_stderr": 0.03490205592048573, "acc_norm": 0.37305699481865284, "acc_norm_stderr": 0.03490205592048573 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.26666666666666666, "acc_stderr": 0.02242127361292371, "acc_norm": 0.26666666666666666, "acc_norm_stderr": 0.02242127361292371 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.21481481481481482, "acc_stderr": 0.025040443877000683, "acc_norm": 0.21481481481481482, "acc_norm_stderr": 0.025040443877000683 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.22268907563025211, "acc_stderr": 0.027025433498882364, "acc_norm": 0.22268907563025211, "acc_norm_stderr": 0.027025433498882364 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.23178807947019867, "acc_stderr": 0.034454062719870546, "acc_norm": 0.23178807947019867, "acc_norm_stderr": 0.034454062719870546 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.3302752293577982, "acc_stderr": 0.02016446633634298, "acc_norm": 0.3302752293577982, "acc_norm_stderr": 0.02016446633634298 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.19444444444444445, "acc_stderr": 
0.026991454502036733, "acc_norm": 0.19444444444444445, "acc_norm_stderr": 0.026991454502036733 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.25, "acc_stderr": 0.03039153369274154, "acc_norm": 0.25, "acc_norm_stderr": 0.03039153369274154 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.26582278481012656, "acc_stderr": 0.028756799629658342, "acc_norm": 0.26582278481012656, "acc_norm_stderr": 0.028756799629658342 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.17937219730941703, "acc_stderr": 0.0257498195691928, "acc_norm": 0.17937219730941703, "acc_norm_stderr": 0.0257498195691928 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.2366412213740458, "acc_stderr": 0.037276735755969174, "acc_norm": 0.2366412213740458, "acc_norm_stderr": 0.037276735755969174 }, "harness|hendrycksTest-international_law|5": { "acc": 0.35537190082644626, "acc_stderr": 0.04369236326573981, "acc_norm": 0.35537190082644626, "acc_norm_stderr": 0.04369236326573981 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.25925925925925924, "acc_stderr": 0.042365112580946336, "acc_norm": 0.25925925925925924, "acc_norm_stderr": 0.042365112580946336 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.2822085889570552, "acc_stderr": 0.03536117886664742, "acc_norm": 0.2822085889570552, "acc_norm_stderr": 0.03536117886664742 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.32142857142857145, "acc_stderr": 0.04432804055291519, "acc_norm": 0.32142857142857145, "acc_norm_stderr": 0.04432804055291519 }, "harness|hendrycksTest-management|5": { "acc": 0.1941747572815534, "acc_stderr": 0.03916667762822585, "acc_norm": 0.1941747572815534, "acc_norm_stderr": 0.03916667762822585 }, "harness|hendrycksTest-marketing|5": { "acc": 0.2905982905982906, "acc_stderr": 0.02974504857267404, "acc_norm": 0.2905982905982906, "acc_norm_stderr": 0.02974504857267404 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.25, "acc_stderr": 0.04351941398892446, 
"acc_norm": 0.25, "acc_norm_stderr": 0.04351941398892446 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.23627075351213284, "acc_stderr": 0.015190473717037497, "acc_norm": 0.23627075351213284, "acc_norm_stderr": 0.015190473717037497 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.24566473988439305, "acc_stderr": 0.02317629820399201, "acc_norm": 0.24566473988439305, "acc_norm_stderr": 0.02317629820399201 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.24581005586592178, "acc_stderr": 0.014400296429225587, "acc_norm": 0.24581005586592178, "acc_norm_stderr": 0.014400296429225587 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.25163398692810457, "acc_stderr": 0.024848018263875195, "acc_norm": 0.25163398692810457, "acc_norm_stderr": 0.024848018263875195 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.18006430868167203, "acc_stderr": 0.021823422857744953, "acc_norm": 0.18006430868167203, "acc_norm_stderr": 0.021823422857744953 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.25617283950617287, "acc_stderr": 0.024288533637726095, "acc_norm": 0.25617283950617287, "acc_norm_stderr": 0.024288533637726095 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.2801418439716312, "acc_stderr": 0.02678917235114023, "acc_norm": 0.2801418439716312, "acc_norm_stderr": 0.02678917235114023 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.24837027379400262, "acc_stderr": 0.011035212598034503, "acc_norm": 0.24837027379400262, "acc_norm_stderr": 0.011035212598034503 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.3125, "acc_stderr": 0.02815637344037142, "acc_norm": 0.3125, "acc_norm_stderr": 0.02815637344037142 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.25, "acc_stderr": 0.01751781884501444, "acc_norm": 0.25, "acc_norm_stderr": 0.01751781884501444 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.18181818181818182, "acc_stderr": 0.03694284335337801, "acc_norm": 0.18181818181818182, 
"acc_norm_stderr": 0.03694284335337801 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.31020408163265306, "acc_stderr": 0.029613459872484378, "acc_norm": 0.31020408163265306, "acc_norm_stderr": 0.029613459872484378 }, "harness|hendrycksTest-sociology|5": { "acc": 0.24875621890547264, "acc_stderr": 0.030567675938916707, "acc_norm": 0.24875621890547264, "acc_norm_stderr": 0.030567675938916707 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.25, "acc_stderr": 0.04351941398892446, "acc_norm": 0.25, "acc_norm_stderr": 0.04351941398892446 }, "harness|hendrycksTest-virology|5": { "acc": 0.19879518072289157, "acc_stderr": 0.03106939026078942, "acc_norm": 0.19879518072289157, "acc_norm_stderr": 0.03106939026078942 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.29239766081871343, "acc_stderr": 0.034886477134579215, "acc_norm": 0.29239766081871343, "acc_norm_stderr": 0.034886477134579215 }, "harness|truthfulqa:mc|0": { "mc1": 0.2521419828641371, "mc1_stderr": 0.015201522246299965, "mc2": 0.41257163824244014, "mc2_stderr": 0.015127188811834062 }, "harness|winogrande|5": { "acc": 0.4996053670086819, "acc_stderr": 0.014052481306049512 }, "harness|gsm8k|5": { "acc": 0.0, "acc_stderr": 0.0 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. 
--> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. 
--> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
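Once loaded, the per-task accuracies reported in the results above can be post-processed in a few lines. The sketch below uses a hand-copied excerpt of the scores from this card rather than a live `load_dataset` call, and assumes a 0.25 chance level for the four-option multiple-choice tasks; the `scores` dict and `CHANCE` constant are illustrative, not part of the dataset schema.

```python
# Chance-level accuracy assumed for 4-way multiple-choice tasks
# (MMLU/hendrycksTest-style); an assumption for this sketch.
CHANCE = 0.25

# Excerpt of task -> accuracy, hand-copied from the results JSON above.
scores = {
    "hendrycksTest-security_studies": 0.31020408163265306,
    "hendrycksTest-sociology": 0.24875621890547264,
    "hendrycksTest-us_foreign_policy": 0.25,
    "hendrycksTest-virology": 0.19879518072289157,
    "hendrycksTest-world_religions": 0.29239766081871343,
}

# Keep only tasks scoring strictly above chance, in sorted order.
above_chance = sorted(t for t, acc in scores.items() if acc > CHANCE)
print(above_chance)
# -> ['hendrycksTest-security_studies', 'hendrycksTest-world_religions']
```

In practice the same filtering would be applied to the aggregated "results" config of this dataset after loading it with `datasets.load_dataset`.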
open-llm-leaderboard/details_Sharathhebbar24__chat_gpt2_dpo
[ "region:us" ]
2024-01-24T23:17:43+00:00
{"pretty_name": "Evaluation run of Sharathhebbar24/chat_gpt2_dpo", "dataset_summary": "Dataset automatically created during the evaluation run of model [Sharathhebbar24/chat_gpt2_dpo](https://huggingface.co/Sharathhebbar24/chat_gpt2_dpo) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Sharathhebbar24__chat_gpt2_dpo\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-24T23:16:20.806902](https://huggingface.co/datasets/open-llm-leaderboard/details_Sharathhebbar24__chat_gpt2_dpo/blob/main/results_2024-01-24T23-16-20.806902.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.24915779048270345,\n \"acc_stderr\": 0.030509906389610868,\n \"acc_norm\": 0.25041231816215265,\n \"acc_norm_stderr\": 0.03132600249114931,\n \"mc1\": 0.2521419828641371,\n \"mc1_stderr\": 0.015201522246299965,\n \"mc2\": 0.41257163824244014,\n \"mc2_stderr\": 0.015127188811834062\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.18686006825938567,\n \"acc_stderr\": 0.011391015649694391,\n \"acc_norm\": 0.23976109215017063,\n \"acc_norm_stderr\": 0.012476304127453954\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.28978291177056364,\n \"acc_stderr\": 0.004527343651130803,\n \"acc_norm\": 0.3121888070105557,\n \"acc_norm_stderr\": 0.0046243936909668975\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.22,\n \"acc_stderr\": 0.04163331998932268,\n \"acc_norm\": 0.22,\n \"acc_norm_stderr\": 0.04163331998932268\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.3037037037037037,\n \"acc_stderr\": 0.039725528847851375,\n \"acc_norm\": 0.3037037037037037,\n \"acc_norm_stderr\": 0.039725528847851375\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.17763157894736842,\n \"acc_stderr\": 0.031103182383123398,\n \"acc_norm\": 0.17763157894736842,\n \"acc_norm_stderr\": 0.031103182383123398\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.26,\n \"acc_stderr\": 0.0440844002276808,\n \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.0440844002276808\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.23018867924528302,\n \"acc_stderr\": 0.025907897122408173,\n \"acc_norm\": 0.23018867924528302,\n \"acc_norm_stderr\": 0.025907897122408173\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.2569444444444444,\n \"acc_stderr\": 0.03653946969442099,\n \"acc_norm\": 0.2569444444444444,\n \"acc_norm_stderr\": 0.03653946969442099\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n 
\"acc\": 0.19,\n \"acc_stderr\": 0.039427724440366234,\n \"acc_norm\": 0.19,\n \"acc_norm_stderr\": 0.039427724440366234\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.24,\n \"acc_stderr\": 0.04292346959909283,\n \"acc_norm\": 0.24,\n \"acc_norm_stderr\": 0.04292346959909283\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.29,\n \"acc_stderr\": 0.04560480215720684,\n \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.04560480215720684\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.2543352601156069,\n \"acc_stderr\": 0.0332055644308557,\n \"acc_norm\": 0.2543352601156069,\n \"acc_norm_stderr\": 0.0332055644308557\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.21568627450980393,\n \"acc_stderr\": 0.04092563958237654,\n \"acc_norm\": 0.21568627450980393,\n \"acc_norm_stderr\": 0.04092563958237654\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695236,\n \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695236\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.26382978723404255,\n \"acc_stderr\": 0.028809989854102973,\n \"acc_norm\": 0.26382978723404255,\n \"acc_norm_stderr\": 0.028809989854102973\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.24561403508771928,\n \"acc_stderr\": 0.04049339297748142,\n \"acc_norm\": 0.24561403508771928,\n \"acc_norm_stderr\": 0.04049339297748142\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.2413793103448276,\n \"acc_stderr\": 0.03565998174135302,\n \"acc_norm\": 0.2413793103448276,\n \"acc_norm_stderr\": 0.03565998174135302\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.24074074074074073,\n \"acc_stderr\": 0.02201908001221789,\n \"acc_norm\": 0.24074074074074073,\n \"acc_norm_stderr\": 0.02201908001221789\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.1349206349206349,\n 
\"acc_stderr\": 0.030557101589417515,\n \"acc_norm\": 0.1349206349206349,\n \"acc_norm_stderr\": 0.030557101589417515\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.16,\n \"acc_stderr\": 0.03684529491774708,\n \"acc_norm\": 0.16,\n \"acc_norm_stderr\": 0.03684529491774708\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.1774193548387097,\n \"acc_stderr\": 0.02173254068932927,\n \"acc_norm\": 0.1774193548387097,\n \"acc_norm_stderr\": 0.02173254068932927\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.24630541871921183,\n \"acc_stderr\": 0.030315099285617736,\n \"acc_norm\": 0.24630541871921183,\n \"acc_norm_stderr\": 0.030315099285617736\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542126,\n \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542126\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.21818181818181817,\n \"acc_stderr\": 0.03225078108306289,\n \"acc_norm\": 0.21818181818181817,\n \"acc_norm_stderr\": 0.03225078108306289\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.3282828282828283,\n \"acc_stderr\": 0.03345678422756776,\n \"acc_norm\": 0.3282828282828283,\n \"acc_norm_stderr\": 0.03345678422756776\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.37305699481865284,\n \"acc_stderr\": 0.03490205592048573,\n \"acc_norm\": 0.37305699481865284,\n \"acc_norm_stderr\": 0.03490205592048573\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.26666666666666666,\n \"acc_stderr\": 0.02242127361292371,\n \"acc_norm\": 0.26666666666666666,\n \"acc_norm_stderr\": 0.02242127361292371\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.21481481481481482,\n \"acc_stderr\": 0.025040443877000683,\n \"acc_norm\": 0.21481481481481482,\n \"acc_norm_stderr\": 0.025040443877000683\n },\n 
\"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.22268907563025211,\n \"acc_stderr\": 0.027025433498882364,\n \"acc_norm\": 0.22268907563025211,\n \"acc_norm_stderr\": 0.027025433498882364\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.23178807947019867,\n \"acc_stderr\": 0.034454062719870546,\n \"acc_norm\": 0.23178807947019867,\n \"acc_norm_stderr\": 0.034454062719870546\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.3302752293577982,\n \"acc_stderr\": 0.02016446633634298,\n \"acc_norm\": 0.3302752293577982,\n \"acc_norm_stderr\": 0.02016446633634298\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.19444444444444445,\n \"acc_stderr\": 0.026991454502036733,\n \"acc_norm\": 0.19444444444444445,\n \"acc_norm_stderr\": 0.026991454502036733\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.25,\n \"acc_stderr\": 0.03039153369274154,\n \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.03039153369274154\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.26582278481012656,\n \"acc_stderr\": 0.028756799629658342,\n \"acc_norm\": 0.26582278481012656,\n \"acc_norm_stderr\": 0.028756799629658342\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.17937219730941703,\n \"acc_stderr\": 0.0257498195691928,\n \"acc_norm\": 0.17937219730941703,\n \"acc_norm_stderr\": 0.0257498195691928\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.2366412213740458,\n \"acc_stderr\": 0.037276735755969174,\n \"acc_norm\": 0.2366412213740458,\n \"acc_norm_stderr\": 0.037276735755969174\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.35537190082644626,\n \"acc_stderr\": 0.04369236326573981,\n \"acc_norm\": 0.35537190082644626,\n \"acc_norm_stderr\": 0.04369236326573981\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.25925925925925924,\n \"acc_stderr\": 0.042365112580946336,\n \"acc_norm\": 
0.25925925925925924,\n \"acc_norm_stderr\": 0.042365112580946336\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.2822085889570552,\n \"acc_stderr\": 0.03536117886664742,\n \"acc_norm\": 0.2822085889570552,\n \"acc_norm_stderr\": 0.03536117886664742\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.32142857142857145,\n \"acc_stderr\": 0.04432804055291519,\n \"acc_norm\": 0.32142857142857145,\n \"acc_norm_stderr\": 0.04432804055291519\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.1941747572815534,\n \"acc_stderr\": 0.03916667762822585,\n \"acc_norm\": 0.1941747572815534,\n \"acc_norm_stderr\": 0.03916667762822585\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.2905982905982906,\n \"acc_stderr\": 0.02974504857267404,\n \"acc_norm\": 0.2905982905982906,\n \"acc_norm_stderr\": 0.02974504857267404\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.25,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.23627075351213284,\n \"acc_stderr\": 0.015190473717037497,\n \"acc_norm\": 0.23627075351213284,\n \"acc_norm_stderr\": 0.015190473717037497\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.24566473988439305,\n \"acc_stderr\": 0.02317629820399201,\n \"acc_norm\": 0.24566473988439305,\n \"acc_norm_stderr\": 0.02317629820399201\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.24581005586592178,\n \"acc_stderr\": 0.014400296429225587,\n \"acc_norm\": 0.24581005586592178,\n \"acc_norm_stderr\": 0.014400296429225587\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.25163398692810457,\n \"acc_stderr\": 0.024848018263875195,\n \"acc_norm\": 0.25163398692810457,\n \"acc_norm_stderr\": 0.024848018263875195\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.18006430868167203,\n \"acc_stderr\": 0.021823422857744953,\n 
\"acc_norm\": 0.18006430868167203,\n \"acc_norm_stderr\": 0.021823422857744953\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.25617283950617287,\n \"acc_stderr\": 0.024288533637726095,\n \"acc_norm\": 0.25617283950617287,\n \"acc_norm_stderr\": 0.024288533637726095\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.2801418439716312,\n \"acc_stderr\": 0.02678917235114023,\n \"acc_norm\": 0.2801418439716312,\n \"acc_norm_stderr\": 0.02678917235114023\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.24837027379400262,\n \"acc_stderr\": 0.011035212598034503,\n \"acc_norm\": 0.24837027379400262,\n \"acc_norm_stderr\": 0.011035212598034503\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.3125,\n \"acc_stderr\": 0.02815637344037142,\n \"acc_norm\": 0.3125,\n \"acc_norm_stderr\": 0.02815637344037142\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.25,\n \"acc_stderr\": 0.01751781884501444,\n \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.01751781884501444\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.18181818181818182,\n \"acc_stderr\": 0.03694284335337801,\n \"acc_norm\": 0.18181818181818182,\n \"acc_norm_stderr\": 0.03694284335337801\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.31020408163265306,\n \"acc_stderr\": 0.029613459872484378,\n \"acc_norm\": 0.31020408163265306,\n \"acc_norm_stderr\": 0.029613459872484378\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.24875621890547264,\n \"acc_stderr\": 0.030567675938916707,\n \"acc_norm\": 0.24875621890547264,\n \"acc_norm_stderr\": 0.030567675938916707\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.25,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.19879518072289157,\n \"acc_stderr\": 0.03106939026078942,\n \"acc_norm\": 
0.19879518072289157,\n \"acc_norm_stderr\": 0.03106939026078942\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.29239766081871343,\n \"acc_stderr\": 0.034886477134579215,\n \"acc_norm\": 0.29239766081871343,\n \"acc_norm_stderr\": 0.034886477134579215\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2521419828641371,\n \"mc1_stderr\": 0.015201522246299965,\n \"mc2\": 0.41257163824244014,\n \"mc2_stderr\": 0.015127188811834062\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.4996053670086819,\n \"acc_stderr\": 0.014052481306049512\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n }\n}\n```", "repo_url": "https://huggingface.co/Sharathhebbar24/chat_gpt2_dpo", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|arc:challenge|25_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|gsm8k|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|hellaswag|10_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-24T23-16-20.806902.parquet", 
"**/details_harness|hendrycksTest-astronomy|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T23-16-20.806902.parquet", 
"**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T23-16-20.806902.parquet", 
"**/details_harness|hendrycksTest-nutrition|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T23-16-20.806902.parquet", 
"**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T23-16-20.806902.parquet", 
"**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-24T23-16-20.806902.parquet", 
"**/details_harness|hendrycksTest-security_studies|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-24T23-16-20.806902.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": 
["**/details_harness|hendrycksTest-college_physics|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", 
"data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-24T23-16-20.806902.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", 
"path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["**/details_harness|winogrande|5_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-24T23-16-20.806902.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_01_24T23_16_20.806902", "path": ["results_2024-01-24T23-16-20.806902.parquet"]}, {"split": "latest", "path": ["results_2024-01-24T23-16-20.806902.parquet"]}]}]}
2024-01-24T23:18:17+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of Sharathhebbar24/chat_gpt2_dpo Dataset automatically created during the evaluation run of model Sharathhebbar24/chat_gpt2_dpo on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2024-01-24T23:16:20.806902 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
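The card above says "you can for instance do the following:", but the corresponding code snippet was dropped from this plain-text rendering. A minimal sketch is given below; it assumes the standard Open LLM Leaderboard convention in which the details-repo id is derived from the model id by replacing "/" with "__" (the config name `harness_winogrande_5` is one of the 63 configurations listed in this card's YAML).

```python
def leaderboard_details_repo(model_id: str) -> str:
    """Derive the Open LLM Leaderboard details-repo id from a model id.

    Assumption: the leaderboard publishes per-model details under
    "open-llm-leaderboard/details_<org>__<model>".
    """
    return "open-llm-leaderboard/details_" + model_id.replace("/", "__")


# Usage (requires the `datasets` package and network access):
# from datasets import load_dataset
# data = load_dataset(leaderboard_details_repo("Sharathhebbar24/chat_gpt2_dpo"),
#                     "harness_winogrande_5", split="train")
```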
[ "# Dataset Card for Evaluation run of Sharathhebbar24/chat_gpt2_dpo\n\n\n\nDataset automatically created during the evaluation run of model Sharathhebbar24/chat_gpt2_dpo on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-01-24T23:16:20.806902(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of Sharathhebbar24/chat_gpt2_dpo\n\n\n\nDataset automatically created during the evaluation run of model Sharathhebbar24/chat_gpt2_dpo on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-01-24T23:16:20.806902(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
62298104309a1c0779a87d597460a28dadd7bc99
# Dataset Card for Evaluation run of Sharathhebbar24/math_gpt2_sft <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [Sharathhebbar24/math_gpt2_sft](https://huggingface.co/Sharathhebbar24/math_gpt2_sft) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_Sharathhebbar24__math_gpt2_sft", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2024-01-24T23:24:10.680751](https://huggingface.co/datasets/open-llm-leaderboard/details_Sharathhebbar24__math_gpt2_sft/blob/main/results_2024-01-24T23-24-10.680751.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.25082189621988066, "acc_stderr": 0.030526589726831692, "acc_norm": 0.25112870356236633, "acc_norm_stderr": 0.03129390389566968, "mc1": 0.24112607099143207, "mc1_stderr": 0.014974827279752334, "mc2": 0.3762297840067963, "mc2_stderr": 0.01445991036363257 }, "harness|arc:challenge|25": { "acc": 0.20563139931740615, "acc_stderr": 0.01181074526074258, "acc_norm": 0.22866894197952217, "acc_norm_stderr": 0.012272853582540799 }, "harness|hellaswag|10": { "acc": 0.2884883489344752, "acc_stderr": 0.004521334761709224, "acc_norm": 0.30412268472415854, "acc_norm_stderr": 0.00459094683972719 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.19, "acc_stderr": 0.03942772444036625, "acc_norm": 0.19, "acc_norm_stderr": 0.03942772444036625 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.2074074074074074, "acc_stderr": 0.03502553170678319, "acc_norm": 0.2074074074074074, "acc_norm_stderr": 0.03502553170678319 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.17763157894736842, "acc_stderr": 0.031103182383123398, "acc_norm": 0.17763157894736842, "acc_norm_stderr": 0.031103182383123398 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.19, "acc_stderr": 0.03942772444036622, "acc_norm": 0.19, "acc_norm_stderr": 0.03942772444036622 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.2188679245283019, "acc_stderr": 0.025447863825108618, "acc_norm": 0.2188679245283019, "acc_norm_stderr": 0.025447863825108618 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.25, "acc_stderr": 0.03621034121889507, "acc_norm": 0.25, "acc_norm_stderr": 0.03621034121889507 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.2, "acc_stderr": 0.04020151261036845, "acc_norm": 0.2, "acc_norm_stderr": 0.04020151261036845 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.32, "acc_stderr": 0.046882617226215034, "acc_norm": 0.32, "acc_norm_stderr": 0.046882617226215034 
}, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.24, "acc_stderr": 0.042923469599092816, "acc_norm": 0.24, "acc_norm_stderr": 0.042923469599092816 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.21965317919075145, "acc_stderr": 0.031568093627031744, "acc_norm": 0.21965317919075145, "acc_norm_stderr": 0.031568093627031744 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.23529411764705882, "acc_stderr": 0.04220773659171453, "acc_norm": 0.23529411764705882, "acc_norm_stderr": 0.04220773659171453 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.23, "acc_stderr": 0.04229525846816505, "acc_norm": 0.23, "acc_norm_stderr": 0.04229525846816505 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.2680851063829787, "acc_stderr": 0.028957342788342347, "acc_norm": 0.2680851063829787, "acc_norm_stderr": 0.028957342788342347 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.24561403508771928, "acc_stderr": 0.040493392977481404, "acc_norm": 0.24561403508771928, "acc_norm_stderr": 0.040493392977481404 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.2482758620689655, "acc_stderr": 0.036001056927277716, "acc_norm": 0.2482758620689655, "acc_norm_stderr": 0.036001056927277716 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.24074074074074073, "acc_stderr": 0.0220190800122179, "acc_norm": 0.24074074074074073, "acc_norm_stderr": 0.0220190800122179 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.23015873015873015, "acc_stderr": 0.03764950879790605, "acc_norm": 0.23015873015873015, "acc_norm_stderr": 0.03764950879790605 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.18, "acc_stderr": 0.038612291966536934, "acc_norm": 0.18, "acc_norm_stderr": 0.038612291966536934 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.25483870967741934, "acc_stderr": 0.024790118459332208, "acc_norm": 0.25483870967741934, "acc_norm_stderr": 0.024790118459332208 }, 
"harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.19704433497536947, "acc_stderr": 0.02798672466673622, "acc_norm": 0.19704433497536947, "acc_norm_stderr": 0.02798672466673622 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.22, "acc_stderr": 0.041633319989322695, "acc_norm": 0.22, "acc_norm_stderr": 0.041633319989322695 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.19393939393939394, "acc_stderr": 0.0308741451365621, "acc_norm": 0.19393939393939394, "acc_norm_stderr": 0.0308741451365621 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.3484848484848485, "acc_stderr": 0.033948539651564025, "acc_norm": 0.3484848484848485, "acc_norm_stderr": 0.033948539651564025 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.32124352331606215, "acc_stderr": 0.033699508685490674, "acc_norm": 0.32124352331606215, "acc_norm_stderr": 0.033699508685490674 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.23333333333333334, "acc_stderr": 0.021444547301560476, "acc_norm": 0.23333333333333334, "acc_norm_stderr": 0.021444547301560476 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.2851851851851852, "acc_stderr": 0.027528599210340492, "acc_norm": 0.2851851851851852, "acc_norm_stderr": 0.027528599210340492 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.29831932773109243, "acc_stderr": 0.029719142876342856, "acc_norm": 0.29831932773109243, "acc_norm_stderr": 0.029719142876342856 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.2781456953642384, "acc_stderr": 0.03658603262763744, "acc_norm": 0.2781456953642384, "acc_norm_stderr": 0.03658603262763744 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.26788990825688075, "acc_stderr": 0.018987462257978652, "acc_norm": 0.26788990825688075, "acc_norm_stderr": 0.018987462257978652 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.4351851851851852, "acc_stderr": 
0.03381200005643525, "acc_norm": 0.4351851851851852, "acc_norm_stderr": 0.03381200005643525 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.2647058823529412, "acc_stderr": 0.0309645179269234, "acc_norm": 0.2647058823529412, "acc_norm_stderr": 0.0309645179269234 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.28270042194092826, "acc_stderr": 0.029312814153955927, "acc_norm": 0.28270042194092826, "acc_norm_stderr": 0.029312814153955927 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.31390134529147984, "acc_stderr": 0.031146796482972465, "acc_norm": 0.31390134529147984, "acc_norm_stderr": 0.031146796482972465 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.2595419847328244, "acc_stderr": 0.03844876139785271, "acc_norm": 0.2595419847328244, "acc_norm_stderr": 0.03844876139785271 }, "harness|hendrycksTest-international_law|5": { "acc": 0.2231404958677686, "acc_stderr": 0.03800754475228733, "acc_norm": 0.2231404958677686, "acc_norm_stderr": 0.03800754475228733 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.25925925925925924, "acc_stderr": 0.042365112580946336, "acc_norm": 0.25925925925925924, "acc_norm_stderr": 0.042365112580946336 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.25153374233128833, "acc_stderr": 0.03408997886857529, "acc_norm": 0.25153374233128833, "acc_norm_stderr": 0.03408997886857529 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.29464285714285715, "acc_stderr": 0.043270409325787296, "acc_norm": 0.29464285714285715, "acc_norm_stderr": 0.043270409325787296 }, "harness|hendrycksTest-management|5": { "acc": 0.17475728155339806, "acc_stderr": 0.037601780060266224, "acc_norm": 0.17475728155339806, "acc_norm_stderr": 0.037601780060266224 }, "harness|hendrycksTest-marketing|5": { "acc": 0.20085470085470086, "acc_stderr": 0.026246772946890488, "acc_norm": 0.20085470085470086, "acc_norm_stderr": 0.026246772946890488 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.3, 
"acc_stderr": 0.046056618647183814, "acc_norm": 0.3, "acc_norm_stderr": 0.046056618647183814 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.23499361430395913, "acc_stderr": 0.01516202415227844, "acc_norm": 0.23499361430395913, "acc_norm_stderr": 0.01516202415227844 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.23699421965317918, "acc_stderr": 0.02289408248992599, "acc_norm": 0.23699421965317918, "acc_norm_stderr": 0.02289408248992599 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.23798882681564246, "acc_stderr": 0.014242630070574915, "acc_norm": 0.23798882681564246, "acc_norm_stderr": 0.014242630070574915 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.23202614379084968, "acc_stderr": 0.024170840879341005, "acc_norm": 0.23202614379084968, "acc_norm_stderr": 0.024170840879341005 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.1864951768488746, "acc_stderr": 0.02212243977248077, "acc_norm": 0.1864951768488746, "acc_norm_stderr": 0.02212243977248077 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.24074074074074073, "acc_stderr": 0.02378858355165854, "acc_norm": 0.24074074074074073, "acc_norm_stderr": 0.02378858355165854 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.2695035460992908, "acc_stderr": 0.026469036818590627, "acc_norm": 0.2695035460992908, "acc_norm_stderr": 0.026469036818590627 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.2529335071707953, "acc_stderr": 0.011102268713839989, "acc_norm": 0.2529335071707953, "acc_norm_stderr": 0.011102268713839989 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.4411764705882353, "acc_stderr": 0.030161911930767102, "acc_norm": 0.4411764705882353, "acc_norm_stderr": 0.030161911930767102 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.25, "acc_stderr": 0.01751781884501444, "acc_norm": 0.25, "acc_norm_stderr": 0.01751781884501444 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.21818181818181817, "acc_stderr": 
0.03955932861795833, "acc_norm": 0.21818181818181817, "acc_norm_stderr": 0.03955932861795833 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.20408163265306123, "acc_stderr": 0.025801283475090506, "acc_norm": 0.20408163265306123, "acc_norm_stderr": 0.025801283475090506 }, "harness|hendrycksTest-sociology|5": { "acc": 0.24378109452736318, "acc_stderr": 0.03036049015401465, "acc_norm": 0.24378109452736318, "acc_norm_stderr": 0.03036049015401465 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.24, "acc_stderr": 0.04292346959909281, "acc_norm": 0.24, "acc_norm_stderr": 0.04292346959909281 }, "harness|hendrycksTest-virology|5": { "acc": 0.22289156626506024, "acc_stderr": 0.03240004825594687, "acc_norm": 0.22289156626506024, "acc_norm_stderr": 0.03240004825594687 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.3216374269005848, "acc_stderr": 0.03582529442573122, "acc_norm": 0.3216374269005848, "acc_norm_stderr": 0.03582529442573122 }, "harness|truthfulqa:mc|0": { "mc1": 0.24112607099143207, "mc1_stderr": 0.014974827279752334, "mc2": 0.3762297840067963, "mc2_stderr": 0.01445991036363257 }, "harness|winogrande|5": { "acc": 0.5153906866614049, "acc_stderr": 0.014045826789783668 }, "harness|gsm8k|5": { "acc": 0.006823351023502654, "acc_stderr": 0.0022675371022544823 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. 
--> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. 
--> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
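The configuration names listed in this card's metadata follow a simple pattern, `harness_<task>_<shots>`, with separators in the raw task ids (such as `arc:challenge` or `hendrycksTest-anatomy`) normalized to underscores. A minimal sketch of that mapping, assuming only the naming convention visible in this card (the helper name is hypothetical, not part of the `datasets` API):

```python
def harness_config_name(task: str, shots: int) -> str:
    """Build a leaderboard config name such as "harness_gsm8k_5".

    Separators used in raw task ids ("arc:challenge", "hendrycksTest-anatomy")
    are normalized to underscores, matching the config names in this card's
    metadata. Illustrative helper only, not an official API.
    """
    normalized = task.replace(":", "_").replace("-", "_").replace("|", "_")
    return f"harness_{normalized}_{shots}"

# Config names as they appear in the metadata of this card:
print(harness_config_name("arc:challenge", 25))         # harness_arc_challenge_25
print(harness_config_name("hendrycksTest-anatomy", 5))  # harness_hendrycksTest_anatomy_5
```

The resulting string is what you would pass as the second argument to `load_dataset`, as shown in the summary's loading snippet.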
open-llm-leaderboard/details_Sharathhebbar24__math_gpt2_sft
[ "region:us" ]
2024-01-24T23:25:30+00:00
{"pretty_name": "Evaluation run of Sharathhebbar24/math_gpt2_sft", "dataset_summary": "Dataset automatically created during the evaluation run of model [Sharathhebbar24/math_gpt2_sft](https://huggingface.co/Sharathhebbar24/math_gpt2_sft) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Sharathhebbar24__math_gpt2_sft\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-24T23:24:10.680751](https://huggingface.co/datasets/open-llm-leaderboard/details_Sharathhebbar24__math_gpt2_sft/blob/main/results_2024-01-24T23-24-10.680751.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.25082189621988066,\n \"acc_stderr\": 0.030526589726831692,\n \"acc_norm\": 0.25112870356236633,\n \"acc_norm_stderr\": 0.03129390389566968,\n \"mc1\": 0.24112607099143207,\n \"mc1_stderr\": 0.014974827279752334,\n \"mc2\": 0.3762297840067963,\n \"mc2_stderr\": 0.01445991036363257\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.20563139931740615,\n \"acc_stderr\": 0.01181074526074258,\n \"acc_norm\": 0.22866894197952217,\n \"acc_norm_stderr\": 0.012272853582540799\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.2884883489344752,\n \"acc_stderr\": 0.004521334761709224,\n \"acc_norm\": 0.30412268472415854,\n \"acc_norm_stderr\": 0.00459094683972719\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.19,\n \"acc_stderr\": 0.03942772444036625,\n \"acc_norm\": 0.19,\n \"acc_norm_stderr\": 0.03942772444036625\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.2074074074074074,\n \"acc_stderr\": 0.03502553170678319,\n \"acc_norm\": 0.2074074074074074,\n \"acc_norm_stderr\": 0.03502553170678319\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.17763157894736842,\n \"acc_stderr\": 0.031103182383123398,\n \"acc_norm\": 0.17763157894736842,\n \"acc_norm_stderr\": 0.031103182383123398\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.19,\n \"acc_stderr\": 0.03942772444036622,\n \"acc_norm\": 0.19,\n \"acc_norm_stderr\": 0.03942772444036622\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.2188679245283019,\n \"acc_stderr\": 0.025447863825108618,\n \"acc_norm\": 0.2188679245283019,\n \"acc_norm_stderr\": 0.025447863825108618\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.25,\n \"acc_stderr\": 0.03621034121889507,\n \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.03621034121889507\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.2,\n \"acc_stderr\": 
0.04020151261036845,\n \"acc_norm\": 0.2,\n \"acc_norm_stderr\": 0.04020151261036845\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.24,\n \"acc_stderr\": 0.042923469599092816,\n \"acc_norm\": 0.24,\n \"acc_norm_stderr\": 0.042923469599092816\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.21965317919075145,\n \"acc_stderr\": 0.031568093627031744,\n \"acc_norm\": 0.21965317919075145,\n \"acc_norm_stderr\": 0.031568093627031744\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.23529411764705882,\n \"acc_stderr\": 0.04220773659171453,\n \"acc_norm\": 0.23529411764705882,\n \"acc_norm_stderr\": 0.04220773659171453\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.23,\n \"acc_stderr\": 0.04229525846816505,\n \"acc_norm\": 0.23,\n \"acc_norm_stderr\": 0.04229525846816505\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.2680851063829787,\n \"acc_stderr\": 0.028957342788342347,\n \"acc_norm\": 0.2680851063829787,\n \"acc_norm_stderr\": 0.028957342788342347\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.24561403508771928,\n \"acc_stderr\": 0.040493392977481404,\n \"acc_norm\": 0.24561403508771928,\n \"acc_norm_stderr\": 0.040493392977481404\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.2482758620689655,\n \"acc_stderr\": 0.036001056927277716,\n \"acc_norm\": 0.2482758620689655,\n \"acc_norm_stderr\": 0.036001056927277716\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.24074074074074073,\n \"acc_stderr\": 0.0220190800122179,\n \"acc_norm\": 0.24074074074074073,\n \"acc_norm_stderr\": 0.0220190800122179\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.23015873015873015,\n \"acc_stderr\": 0.03764950879790605,\n 
\"acc_norm\": 0.23015873015873015,\n \"acc_norm_stderr\": 0.03764950879790605\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.18,\n \"acc_stderr\": 0.038612291966536934,\n \"acc_norm\": 0.18,\n \"acc_norm_stderr\": 0.038612291966536934\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.25483870967741934,\n \"acc_stderr\": 0.024790118459332208,\n \"acc_norm\": 0.25483870967741934,\n \"acc_norm_stderr\": 0.024790118459332208\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.19704433497536947,\n \"acc_stderr\": 0.02798672466673622,\n \"acc_norm\": 0.19704433497536947,\n \"acc_norm_stderr\": 0.02798672466673622\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.22,\n \"acc_stderr\": 0.041633319989322695,\n \"acc_norm\": 0.22,\n \"acc_norm_stderr\": 0.041633319989322695\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.19393939393939394,\n \"acc_stderr\": 0.0308741451365621,\n \"acc_norm\": 0.19393939393939394,\n \"acc_norm_stderr\": 0.0308741451365621\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.3484848484848485,\n \"acc_stderr\": 0.033948539651564025,\n \"acc_norm\": 0.3484848484848485,\n \"acc_norm_stderr\": 0.033948539651564025\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.32124352331606215,\n \"acc_stderr\": 0.033699508685490674,\n \"acc_norm\": 0.32124352331606215,\n \"acc_norm_stderr\": 0.033699508685490674\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.23333333333333334,\n \"acc_stderr\": 0.021444547301560476,\n \"acc_norm\": 0.23333333333333334,\n \"acc_norm_stderr\": 0.021444547301560476\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.2851851851851852,\n \"acc_stderr\": 0.027528599210340492,\n \"acc_norm\": 0.2851851851851852,\n \"acc_norm_stderr\": 0.027528599210340492\n },\n 
\"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.29831932773109243,\n \"acc_stderr\": 0.029719142876342856,\n \"acc_norm\": 0.29831932773109243,\n \"acc_norm_stderr\": 0.029719142876342856\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.2781456953642384,\n \"acc_stderr\": 0.03658603262763744,\n \"acc_norm\": 0.2781456953642384,\n \"acc_norm_stderr\": 0.03658603262763744\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.26788990825688075,\n \"acc_stderr\": 0.018987462257978652,\n \"acc_norm\": 0.26788990825688075,\n \"acc_norm_stderr\": 0.018987462257978652\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.4351851851851852,\n \"acc_stderr\": 0.03381200005643525,\n \"acc_norm\": 0.4351851851851852,\n \"acc_norm_stderr\": 0.03381200005643525\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.2647058823529412,\n \"acc_stderr\": 0.0309645179269234,\n \"acc_norm\": 0.2647058823529412,\n \"acc_norm_stderr\": 0.0309645179269234\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.28270042194092826,\n \"acc_stderr\": 0.029312814153955927,\n \"acc_norm\": 0.28270042194092826,\n \"acc_norm_stderr\": 0.029312814153955927\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.31390134529147984,\n \"acc_stderr\": 0.031146796482972465,\n \"acc_norm\": 0.31390134529147984,\n \"acc_norm_stderr\": 0.031146796482972465\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.2595419847328244,\n \"acc_stderr\": 0.03844876139785271,\n \"acc_norm\": 0.2595419847328244,\n \"acc_norm_stderr\": 0.03844876139785271\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.2231404958677686,\n \"acc_stderr\": 0.03800754475228733,\n \"acc_norm\": 0.2231404958677686,\n \"acc_norm_stderr\": 0.03800754475228733\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.25925925925925924,\n \"acc_stderr\": 
0.042365112580946336,\n \"acc_norm\": 0.25925925925925924,\n \"acc_norm_stderr\": 0.042365112580946336\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.25153374233128833,\n \"acc_stderr\": 0.03408997886857529,\n \"acc_norm\": 0.25153374233128833,\n \"acc_norm_stderr\": 0.03408997886857529\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.29464285714285715,\n \"acc_stderr\": 0.043270409325787296,\n \"acc_norm\": 0.29464285714285715,\n \"acc_norm_stderr\": 0.043270409325787296\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.17475728155339806,\n \"acc_stderr\": 0.037601780060266224,\n \"acc_norm\": 0.17475728155339806,\n \"acc_norm_stderr\": 0.037601780060266224\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.20085470085470086,\n \"acc_stderr\": 0.026246772946890488,\n \"acc_norm\": 0.20085470085470086,\n \"acc_norm_stderr\": 0.026246772946890488\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.23499361430395913,\n \"acc_stderr\": 0.01516202415227844,\n \"acc_norm\": 0.23499361430395913,\n \"acc_norm_stderr\": 0.01516202415227844\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.23699421965317918,\n \"acc_stderr\": 0.02289408248992599,\n \"acc_norm\": 0.23699421965317918,\n \"acc_norm_stderr\": 0.02289408248992599\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.23798882681564246,\n \"acc_stderr\": 0.014242630070574915,\n \"acc_norm\": 0.23798882681564246,\n \"acc_norm_stderr\": 0.014242630070574915\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.23202614379084968,\n \"acc_stderr\": 0.024170840879341005,\n \"acc_norm\": 0.23202614379084968,\n \"acc_norm_stderr\": 0.024170840879341005\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.1864951768488746,\n 
\"acc_stderr\": 0.02212243977248077,\n \"acc_norm\": 0.1864951768488746,\n \"acc_norm_stderr\": 0.02212243977248077\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.24074074074074073,\n \"acc_stderr\": 0.02378858355165854,\n \"acc_norm\": 0.24074074074074073,\n \"acc_norm_stderr\": 0.02378858355165854\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.2695035460992908,\n \"acc_stderr\": 0.026469036818590627,\n \"acc_norm\": 0.2695035460992908,\n \"acc_norm_stderr\": 0.026469036818590627\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.2529335071707953,\n \"acc_stderr\": 0.011102268713839989,\n \"acc_norm\": 0.2529335071707953,\n \"acc_norm_stderr\": 0.011102268713839989\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.4411764705882353,\n \"acc_stderr\": 0.030161911930767102,\n \"acc_norm\": 0.4411764705882353,\n \"acc_norm_stderr\": 0.030161911930767102\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.25,\n \"acc_stderr\": 0.01751781884501444,\n \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.01751781884501444\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.21818181818181817,\n \"acc_stderr\": 0.03955932861795833,\n \"acc_norm\": 0.21818181818181817,\n \"acc_norm_stderr\": 0.03955932861795833\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.20408163265306123,\n \"acc_stderr\": 0.025801283475090506,\n \"acc_norm\": 0.20408163265306123,\n \"acc_norm_stderr\": 0.025801283475090506\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.24378109452736318,\n \"acc_stderr\": 0.03036049015401465,\n \"acc_norm\": 0.24378109452736318,\n \"acc_norm_stderr\": 0.03036049015401465\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.24,\n \"acc_stderr\": 0.04292346959909281,\n \"acc_norm\": 0.24,\n \"acc_norm_stderr\": 0.04292346959909281\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.22289156626506024,\n 
\"acc_stderr\": 0.03240004825594687,\n \"acc_norm\": 0.22289156626506024,\n \"acc_norm_stderr\": 0.03240004825594687\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.3216374269005848,\n \"acc_stderr\": 0.03582529442573122,\n \"acc_norm\": 0.3216374269005848,\n \"acc_norm_stderr\": 0.03582529442573122\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.24112607099143207,\n \"mc1_stderr\": 0.014974827279752334,\n \"mc2\": 0.3762297840067963,\n \"mc2_stderr\": 0.01445991036363257\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.5153906866614049,\n \"acc_stderr\": 0.014045826789783668\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.006823351023502654,\n \"acc_stderr\": 0.0022675371022544823\n }\n}\n```", "repo_url": "https://huggingface.co/Sharathhebbar24/math_gpt2_sft", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|arc:challenge|25_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|gsm8k|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|hellaswag|10_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T23-24-10.680751.parquet", 
"**/details_harness|hendrycksTest-anatomy|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T23-24-10.680751.parquet", 
"**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T23-24-10.680751.parquet", 
"**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T23-24-10.680751.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T23-24-10.680751.parquet", 
"**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T23-24-10.680751.parquet", 
"**/details_harness|hendrycksTest-public_relations|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-24T23-24-10.680751.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": 
["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": 
["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", 
"path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": 
"2024_01_24T23_24_10.680751", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": 
"2024_01_24T23_24_10.680751", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-24T23-24-10.680751.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["**/details_harness|winogrande|5_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-24T23-24-10.680751.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_01_24T23_24_10.680751", "path": ["results_2024-01-24T23-24-10.680751.parquet"]}, {"split": "latest", "path": ["results_2024-01-24T23-24-10.680751.parquet"]}]}]}
2024-01-24T23:25:57+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of Sharathhebbar24/math_gpt2_sft Dataset automatically created during the evaluation run of model Sharathhebbar24/math_gpt2_sft on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2024-01-24T23:24:10.680751 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
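The loading step referenced above ("you can for instance do the following") can be sketched as below. The repo and config names follow the leaderboard's usual `open-llm-leaderboard/details_<org>__<model>` pattern and are assumptions, not verified against the Hub:

```python
# Hedged sketch: repo_id and config below follow the standard
# Open LLM Leaderboard naming pattern; treat them as assumptions.
org, model = "Sharathhebbar24", "math_gpt2_sft"
repo_id = f"open-llm-leaderboard/details_{org}__{model}"
config = "harness_winogrande_5"  # one config per evaluated task

print(repo_id)  # open-llm-leaderboard/details_Sharathhebbar24__math_gpt2_sft

# The actual download needs network access and the `datasets` library:
# from datasets import load_dataset
# data = load_dataset(repo_id, config, split="latest")
```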
[ "# Dataset Card for Evaluation run of Sharathhebbar24/math_gpt2_sft\n\n\n\nDataset automatically created during the evaluation run of model Sharathhebbar24/math_gpt2_sft on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-01-24T23:24:10.680751(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of Sharathhebbar24/math_gpt2_sft\n\n\n\nDataset automatically created during the evaluation run of model Sharathhebbar24/math_gpt2_sft on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-01-24T23:24:10.680751(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
a493ded4e26a4f3731c119800f7069e78ee733e5
# Summary `aaditya/databricks-dolly-15k-Hinglish-Codemix` is an open-source Hinglish code-mixed version of databricks/databricks-dolly-15k. This dataset can be used for any purpose, whether academic or commercial, under the terms of the [Creative Commons Attribution-ShareAlike 3.0 Unported License](https://creativecommons.org/licenses/by-sa/3.0/legalcode). Supported Tasks: - Training LLMs - Synthetic Data Generation - Data Augmentation Languages: Hindi Version: 1.0 Original Dataset repo https://huggingface.co/datasets/databricks/databricks-dolly-15k # Citation ``` @misc {dolly_hinglish, author = { Pal, Ankit }, title = { databricks-dolly-15k-Hinglish-Codemix (Revision ca228c8) }, year = 2024, url = { https://huggingface.co/datasets/aaditya/databricks-dolly-15k-Hinglish-Codemix }, doi = { 10.57967/hf/1677 }, publisher = { Hugging Face } } ```
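The repo metadata lists parallel English and code-mixed columns (`en_instruction`, `en_output`, `codemix_instruction`, `codemix_output`, ...). A minimal sketch of how a record could be turned into an (English, Hinglish) pair for the data-augmentation use case; the record values below are invented for illustration:

```python
# Toy record mirroring the dataset's schema; the field values are made up.
record = {
    "id": "0",
    "en_category": "open_qa",
    "en_instruction": "Name a famous river in India.",
    "en_output": "The Ganges.",
    "codemix_instruction": "India ki ek famous nadi ka naam batao.",
    "codemix_output": "Ganga.",
}

def to_pair(rec):
    """Build an (English, Hinglish) instruction pair from one record."""
    return (rec["en_instruction"], rec["codemix_instruction"])

en, hinglish = to_pair(record)
print(en)
```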
aaditya/databricks-dolly-15k-Hinglish-Codemix
[ "hindi", "codemix", "hinglish", "doi:10.57967/hf/1677", "region:us" ]
2024-01-24T23:32:57+00:00
{"dataset_info": {"features": [{"name": "en_instruction", "dtype": "string"}, {"name": "en_input", "dtype": "string"}, {"name": "en_output", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "en_category", "dtype": "string"}, {"name": "codemix_instruction", "dtype": "string"}, {"name": "codemix_input", "dtype": "string"}, {"name": "codemix_output", "dtype": "string"}, {"name": "codemix_category", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 24382493, "num_examples": 15011}], "download_size": 15777048, "dataset_size": 24382493}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "tags": ["hindi", "codemix", "hinglish"]}
2024-01-25T00:19:33+00:00
[]
[]
TAGS #hindi #codemix #hinglish #doi-10.57967/hf/1677 #region-us
# Summary 'aaditya/databricks-dolly-15k-Hinglish-Codemix' is an open-source Hinglish code-mixed version of databricks/databricks-dolly-15k. This dataset can be used for any purpose, whether academic or commercial, under the terms of the Creative Commons Attribution-ShareAlike 3.0 Unported License. Supported Tasks: - Training LLMs - Synthetic Data Generation - Data Augmentation Languages: Hindi Version: 1.0 Original Dataset repo URL
[ "# Summary\n'aaditya/databricks-dolly-15k-Hindi' is an open source Hinglish-Codemix version dataset of databricks/databricks-dolly-15k.\n\nThis dataset can be used for any purpose, whether academic or commercial, under the terms of the \nCreative Commons Attribution-ShareAlike 3.0 Unported License.\n\nSupported Tasks: \n- Training LLMs\n- Synthetic Data Generation\n- Data Augmentation\n \nLanguages: Hindi\n\nVersion: 1.0\n\nOriginal Dataset repo\nURL" ]
[ "TAGS\n#hindi #codemix #hinglish #doi-10.57967/hf/1677 #region-us \n", "# Summary\n'aaditya/databricks-dolly-15k-Hindi' is an open source Hinglish-Codemix version dataset of databricks/databricks-dolly-15k.\n\nThis dataset can be used for any purpose, whether academic or commercial, under the terms of the \nCreative Commons Attribution-ShareAlike 3.0 Unported License.\n\nSupported Tasks: \n- Training LLMs\n- Synthetic Data Generation\n- Data Augmentation\n \nLanguages: Hindi\n\nVersion: 1.0\n\nOriginal Dataset repo\nURL" ]
e630163d1e8d2e9844935d80438bce51bb0d8221
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# 'dataset' is our list of recording artists
dataset = ["Run-D.M.C.", "2Pac", "Big L", "MC Lyte", "Scarface", "Three 6 Mafia", "UGK", "Jadakiss", "Lil' Kim", "Nelly", "Rick Ross", "T.I."]

# Convert the list to a pandas DataFrame
df = pd.DataFrame(dataset, columns=['Lyraticians'])

# Build a document-term matrix
vectorizer = CountVectorizer()
dtm = vectorizer.fit_transform(df['Lyraticians'])

# Topic modeling with Latent Dirichlet Allocation
lda = LatentDirichletAllocation(n_components=3, random_state=42)
topics = lda.fit_transform(dtm)

# Print the top words for each topic
feature_names = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top_words = [feature_names[j] for j in topic.argsort()[-5:][::-1]]
    print(f"Topic {i + 1}: {', '.join(top_words)}")
18Barz/lyratix
[ "task_categories:zero-shot-classification", "size_categories:100M<n<1B", "language:en", "language:af", "language:ar", "language:es", "language:sw", "license:apache-2.0", "music", "not-for-all-audiences", "finance", "region:us" ]
2024-01-24T23:34:12+00:00
{"language": ["en", "af", "ar", "es", "sw"], "license": "apache-2.0", "size_categories": ["100M<n<1B"], "task_categories": ["zero-shot-classification"], "pretty_name": "soulo_lyratix", "tags": ["music", "not-for-all-audiences", "finance"]}
2024-01-25T17:28:19+00:00
[]
[ "en", "af", "ar", "es", "sw" ]
TAGS #task_categories-zero-shot-classification #size_categories-100M<n<1B #language-English #language-Afrikaans #language-Arabic #language-Spanish #language-Swahili (macrolanguage) #license-apache-2.0 #music #not-for-all-audiences #finance #region-us
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# 'dataset' is our list of recording artists
dataset = ["Run-D.M.C.", "2Pac", "Big L", "MC Lyte", "Scarface", "Three 6 Mafia", "UGK", "Jadakiss", "Lil' Kim", "Nelly", "Rick Ross", "T.I."]

# Convert the list to a pandas DataFrame
df = pd.DataFrame(dataset, columns=['Lyraticians'])

# Build a document-term matrix
vectorizer = CountVectorizer()
dtm = vectorizer.fit_transform(df['Lyraticians'])

# Topic modeling with Latent Dirichlet Allocation
lda = LatentDirichletAllocation(n_components=3, random_state=42)
topics = lda.fit_transform(dtm)

# Print the top words for each topic
feature_names = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top_words = [feature_names[j] for j in topic.argsort()[-5:][::-1]]
    print(f"Topic {i + 1}: {', '.join(top_words)}")
[ "# Theorize 'dataset' our list of recording artist\ndataset = [\"Run-D.M.C.\", \"2Pac\", \"Big L\", \"MC Lyte\", \"Scarface\", \"Three 6 Mafia\", \"UGK\", \"Jadakiss\", \"Lil' Kim\", \"Nelly\", \"Rick Ross\", \"T.I.\"]", "# Convert the list to a pandas DataFrame\ndf = pd.DataFrame(dataset, columns=['Lyraticians'])", "# lyratix a document-term matrix\nvectorizer = CountVectorizer()\ndtm = vectorizer.fit_transform(df['Lyraticians'])", "# bring into play (bip) deroy(paymInt) modeling\nLIrA = Logical it·er·a·tion architecture (T_transformer=3, random_state=42)\ntopics = bip.fit_transform(dtm)", "# Print the top words for each topic\nlyratix_DeRoy = vectorizer.get_finance_Rechord_out()\nfor T, topic in enumerate(bip.transfomer_):\n top_words = [feature_names[bip] for bip in topic.dispersclrk()[-5:][::-1]]\n print(B\"Topic {b + 1}: {', '.join(upper_lyratix)}\")" ]
[ "TAGS\n#task_categories-zero-shot-classification #size_categories-100M<n<1B #language-English #language-Afrikaans #language-Arabic #language-Spanish #language-Swahili (macrolanguage) #license-apache-2.0 #music #not-for-all-audiences #finance #region-us \n", "# Theorize 'dataset' our list of recording artist\ndataset = [\"Run-D.M.C.\", \"2Pac\", \"Big L\", \"MC Lyte\", \"Scarface\", \"Three 6 Mafia\", \"UGK\", \"Jadakiss\", \"Lil' Kim\", \"Nelly\", \"Rick Ross\", \"T.I.\"]", "# Convert the list to a pandas DataFrame\ndf = pd.DataFrame(dataset, columns=['Lyraticians'])", "# lyratix a document-term matrix\nvectorizer = CountVectorizer()\ndtm = vectorizer.fit_transform(df['Lyraticians'])", "# bring into play (bip) deroy(paymInt) modeling\nLIrA = Logical it·er·a·tion architecture (T_transformer=3, random_state=42)\ntopics = bip.fit_transform(dtm)", "# Print the top words for each topic\nlyratix_DeRoy = vectorizer.get_finance_Rechord_out()\nfor T, topic in enumerate(bip.transfomer_):\n top_words = [feature_names[bip] for bip in topic.dispersclrk()[-5:][::-1]]\n print(B\"Topic {b + 1}: {', '.join(upper_lyratix)}\")" ]
baa555e2d4aea6911c23621f1a7456be9bd82038
## Nordland Dataset This dataset is from the original videos released here: [https://nrkbeta.no/2013/01/15/nordlandsbanen-minute-by-minute-season-by-season/](https://nrkbeta.no/2013/01/15/nordlandsbanen-minute-by-minute-season-by-season/) ### Citation Information Please cite the original publication if you use this dataset. Sünderhauf, Niko, Peer Neubert, and Peter Protzel. "Are we there yet? Challenging SeqSLAM on a 3000 km journey across all four seasons." Proc. of Workshop on Long-Term Autonomy, IEEE International Conference on Robotics and Automation (ICRA). 2013. ```bibtex @inproceedings{sunderhauf2013we, title={Are we there yet? Challenging SeqSLAM on a 3000 km journey across all four seasons}, author={S{\"u}nderhauf, Niko and Neubert, Peer and Protzel, Peter}, booktitle={Proc. of workshop on long-term autonomy, IEEE international conference on robotics and automation (ICRA)}, pages={2013}, year={2013} } ``` ### Dataset Description The Nordland dataset captures a 728 km railway journey in Norway across four seasons: spring, summer, fall, and winter. It is organised into four folders, each named after a season and containing 35,768 images. These images maintain a one-to-one correspondence across folders. For each traverse, the corresponding ground truth data is available in designated .csv files. We have also included a file named `nordland_imageNames.txt`, which offers a filtered list of images. This selection excludes segments captured when the train's speed fell below 15 km/h, as determined by the accompanying GPS data. 
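Because the images maintain a one-to-one correspondence across the four season folders, pairing a filtered frame across traverses reduces to joining paths. A minimal sketch (the root directory, folder names, and file names here are assumptions; the real frame names come from `nordland_imageNames.txt`):

```python
import os

SEASONS = ["spring", "summer", "fall", "winter"]  # assumed folder names

def aligned_paths(image_names, root="nordland"):
    """Pair each filtered frame name with its file in all four season folders."""
    return [
        {season: os.path.join(root, season, name) for season in SEASONS}
        for name in image_names
    ]

# Two placeholder frame names; the real list comes from nordland_imageNames.txt.
pairs = aligned_paths(["00001.png", "00002.png"])
```

Each entry of `pairs` then maps a season to the same physical location along the 728 km route.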
### Our Utilisation We have used this dataset for the three publications below: * Ensembles of Modular SNNs with/without sequence matching: [Applications of Spiking Neural Networks in Visual Place Recognition](https://arxiv.org/abs/2311.13186) * Modular SNN: [Ensembles of Compact, Region-specific & Regularized Spiking Neural Networks for Scalable Place Recognition (ICRA 2023)](https://arxiv.org/abs/2209.08723) DOI: [10.1109/ICRA48891.2023.10160749](https://doi.org/10.1109/ICRA48891.2023.10160749) * Non-modular SNN: [Spiking Neural Networks for Visual Place Recognition via Weighted Neuronal Assignments (RAL + ICRA2022)](https://arxiv.org/abs/2109.06452) DOI: [10.1109/LRA.2022.3149030](https://doi.org/10.1109/LRA.2022.3149030) The code for our three papers mentioned above is publicly available at: [https://github.com/QVPR/VPRSNN](https://github.com/QVPR/VPRSNN)
Somayeh-h/Nordland
[ "size_categories:100K<n<1M", "license:cc-by-nc-sa-4.0", "arxiv:2311.13186", "arxiv:2209.08723", "arxiv:2109.06452", "region:us" ]
2024-01-24T23:38:48+00:00
{"license": "cc-by-nc-sa-4.0", "size_categories": ["100K<n<1M"]}
2024-01-25T01:51:58+00:00
[ "2311.13186", "2209.08723", "2109.06452" ]
[]
TAGS #size_categories-100K<n<1M #license-cc-by-nc-sa-4.0 #arxiv-2311.13186 #arxiv-2209.08723 #arxiv-2109.06452 #region-us
## Nordland Dataset This dataset is from the original videos released here: URL Please cite the original publication if you use this dataset. Sünderhauf, Niko, Peer Neubert, and Peter Protzel. "Are we there yet? Challenging SeqSLAM on a 3000 km journey across all four seasons." Proc. of Workshop on Long-Term Autonomy, IEEE International Conference on Robotics and Automation (ICRA). 2013. ### Dataset Description The Nordland dataset captures a 728 km railway journey in Norway across four seasons: spring, summer, fall, and winter. It is organised into four folders, each named after a season and containing 35,768 images. These images maintain a one-to-one correspondence across folders. For each traverse, the corresponding ground truth data is available in designated .csv files. We have also included a file named 'nordland_imageNames.txt', which offers a filtered list of images. This selection excludes segments captured when the train's speed fell below 15 km/h, as determined by the accompanying GPS data. ### Our Utilisation We have used this dataset for the three publications below: * Ensembles of Modular SNNs with/without sequence matching: Applications of Spiking Neural Networks in Visual Place Recognition * Modular SNN: Ensembles of Compact, Region-specific & Regularized Spiking Neural Networks for Scalable Place Recognition (ICRA 2023) DOI: 10.1109/ICRA48891.2023.10160749 * Non-modular SNN: Spiking Neural Networks for Visual Place Recognition via Weighted Neuronal Assignments (RAL + ICRA2022) DOI: 10.1109/LRA.2022.3149030 The code for our three papers mentioned above is publicly available at: URL
[ "## Nordland Dataset \n\nThis dataset is from the original videos released here: URL\n\n\n\n\nPlease cite the original publication if you use this dataset. \n\nSünderhauf, Niko, Peer Neubert, and Peter Protzel. \"Are we there yet? Challenging SeqSLAM on a 3000 km journey across all four seasons.\" Proc. of Workshop on Long-Term Autonomy, IEEE International Conference on Robotics and Automation (ICRA). 2013.", "### Dataset Description\n\nThe Nordland dataset captures a 728 km railway journey in Norway across four seasons: spring, summer, fall, and winter. \nIt is organised into four folders, each named after a season and containing 35,768 images. \n\nThese images maintain a one-to-one correspondence across folders. \nFor each traverse, the corresponding ground truth data is available in designated .csv files. \n\nWe have also included a file named 'nordland_imageNames.txt', which offers a filtered list of images. \nThis selection excludes segments captured when the train's speed fell below 15 km/h, as determined by the accompanying GPS data.", "### Our Utilisation \n\nWe have used this dataset for the three publications below: \n\n* Ensembles of Modular SNNs with/without sequence matching: Applications of Spiking Neural Networks in Visual Place Recognition\n\n* Modular SNN: Ensembles of Compact, Region-specific & Regularized Spiking Neural Networks for Scalable Place Recognition (ICRA 2023) DOI: 10.1109/ICRA48891.2023.10160749\n\n* Non-modular SNN: Spiking Neural Networks for Visual Place Recognition via Weighted Neuronal Assignments (RAL + ICRA2022) DOI: 10.1109/LRA.2022.3149030\n\n\nThe code for our three papers mentioned above is publicly available at: URL" ]
[ "TAGS\n#size_categories-100K<n<1M #license-cc-by-nc-sa-4.0 #arxiv-2311.13186 #arxiv-2209.08723 #arxiv-2109.06452 #region-us \n", "## Nordland Dataset \n\nThis dataset is from the original videos released here: URL\n\n\n\n\nPlease cite the original publication if you use this dataset. \n\nSünderhauf, Niko, Peer Neubert, and Peter Protzel. \"Are we there yet? Challenging SeqSLAM on a 3000 km journey across all four seasons.\" Proc. of Workshop on Long-Term Autonomy, IEEE International Conference on Robotics and Automation (ICRA). 2013.", "### Dataset Description\n\nThe Nordland dataset captures a 728 km railway journey in Norway across four seasons: spring, summer, fall, and winter. \nIt is organised into four folders, each named after a season and containing 35,768 images. \n\nThese images maintain a one-to-one correspondence across folders. \nFor each traverse, the corresponding ground truth data is available in designated .csv files. \n\nWe have also included a file named 'nordland_imageNames.txt', which offers a filtered list of images. \nThis selection excludes segments captured when the train's speed fell below 15 km/h, as determined by the accompanying GPS data.", "### Our Utilisation \n\nWe have used this dataset for the three publications below: \n\n* Ensembles of Modular SNNs with/without sequence matching: Applications of Spiking Neural Networks in Visual Place Recognition\n\n* Modular SNN: Ensembles of Compact, Region-specific & Regularized Spiking Neural Networks for Scalable Place Recognition (ICRA 2023) DOI: 10.1109/ICRA48891.2023.10160749\n\n* Non-modular SNN: Spiking Neural Networks for Visual Place Recognition via Weighted Neuronal Assignments (RAL + ICRA2022) DOI: 10.1109/LRA.2022.3149030\n\n\nThe code for our three papers mentioned above is publicly available at: URL" ]
f4387866bbdb87e53bc1000e6b93e4ce0c0c0520
# K-pop Diffusion Dataset
hayden-donnelly/kpop-diffusion-dataset
[ "region:us" ]
2024-01-25T00:07:07+00:00
{"pretty_name": "K-pop Diffusion Dataset"}
2024-01-25T00:11:37+00:00
[]
[]
TAGS #region-us
# K-pop Diffusion Dataset
[ "# K-pop Diffusion Dataset" ]
[ "TAGS\n#region-us \n", "# K-pop Diffusion Dataset" ]
7866be5c874702b56e797daf9eaed95d9514c54b
# Dataset Card for "balloon" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Niche-Squad/balloon
[ "region:us" ]
2024-01-25T00:15:18+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "image_id", "dtype": "int64"}, {"name": "filename", "dtype": "string"}, {"name": "annotations", "sequence": [{"name": "id", "dtype": "int64"}, {"name": "image_id", "dtype": "int64"}, {"name": "category_id", "dtype": "int64"}, {"name": "iscrowd", "dtype": "int64"}, {"name": "area", "dtype": "float64"}, {"name": "bbox", "sequence": "float64", "length": 4}, {"name": "segmentation", "sequence": {"sequence": "int64"}}]}], "splits": [{"name": "train", "num_bytes": 23704472.0, "num_examples": 47}, {"name": "validation", "num_bytes": 6532081.0, "num_examples": 11}, {"name": "test", "num_bytes": 8824061.0, "num_examples": 16}], "download_size": 38896408, "dataset_size": 39060614.0}}
2024-01-25T00:15:26+00:00
[]
[]
TAGS #region-us
# Dataset Card for "balloon" More Information needed
[ "# Dataset Card for \"balloon\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"balloon\"\n\nMore Information needed" ]
673f09056c76de9f17c5deedd686ccbee5725685
# Dataset Card for "mock-dots" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Niche-Squad/mock-dots
[ "region:us" ]
2024-01-25T00:15:47+00:00
{"dataset_info": [{"config_name": "classification", "features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": "string"}, {"name": "filename", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2412193.0, "num_examples": 600}, {"name": "validation", "num_bytes": 786351.0, "num_examples": 200}, {"name": "test", "num_bytes": 810382.0, "num_examples": 200}], "download_size": 2918527, "dataset_size": 4008926.0}, {"config_name": "detection", "features": [{"name": "image", "dtype": "image"}, {"name": "annotations", "sequence": [{"name": "id", "dtype": "int64"}, {"name": "image_id", "dtype": "int64"}, {"name": "category_id", "dtype": "int64"}, {"name": "iscrowd", "dtype": "int64"}, {"name": "area", "dtype": "float64"}, {"name": "bbox", "sequence": "float64", "length": 4}, {"name": "segmentation", "sequence": {"sequence": "int64"}}]}, {"name": "image_id", "dtype": "int64"}, {"name": "filename", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2821680.0, "num_examples": 600}, {"name": "validation", "num_bytes": 914563.0, "num_examples": 200}, {"name": "test", "num_bytes": 937617.0, "num_examples": 200}], "download_size": 3070761, "dataset_size": 4673860.0}, {"config_name": "regression-one-class", "features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": "int64"}, {"name": "filename", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2412537.0, "num_examples": 600}, {"name": "validation", "num_bytes": 786479.0, "num_examples": 200}, {"name": "test", "num_bytes": 810496.0, "num_examples": 200}], "download_size": 2920373, "dataset_size": 4009512.0}, {"config_name": "regression-two-class", "features": [{"name": "image", "dtype": "image"}, {"name": "label", "sequence": "int64", "length": 2}, {"name": "filename", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2417337.0, "num_examples": 600}, {"name": "validation", "num_bytes": 788079.0, "num_examples": 200}, {"name": "test", "num_bytes": 
812096.0, "num_examples": 200}], "download_size": 2921775, "dataset_size": 4017512.0}], "configs": [{"config_name": "classification", "data_files": [{"split": "train", "path": "classification/train-*"}, {"split": "validation", "path": "classification/validation-*"}, {"split": "test", "path": "classification/test-*"}]}, {"config_name": "detection", "data_files": [{"split": "train", "path": "detection/train-*"}, {"split": "validation", "path": "detection/validation-*"}, {"split": "test", "path": "detection/test-*"}]}, {"config_name": "regression-one-class", "data_files": [{"split": "train", "path": "regression-one-class/train-*"}, {"split": "validation", "path": "regression-one-class/validation-*"}, {"split": "test", "path": "regression-one-class/test-*"}]}, {"config_name": "regression-two-class", "data_files": [{"split": "train", "path": "regression-two-class/train-*"}, {"split": "validation", "path": "regression-two-class/validation-*"}, {"split": "test", "path": "regression-two-class/test-*"}]}]}
2024-01-25T00:16:21+00:00
[]
[]
TAGS #region-us
# Dataset Card for "mock-dots" More Information needed
[ "# Dataset Card for \"mock-dots\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"mock-dots\"\n\nMore Information needed" ]
d1cf08904ce32b3b693a3c6bb9799cda8e057d4d
# Dataset Card for Evaluation run of aevalone/Test-7B-pthrough <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [aevalone/Test-7B-pthrough](https://huggingface.co/aevalone/Test-7B-pthrough) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_aevalone__Test-7B-pthrough", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2024-01-25T00:24:25.045532](https://huggingface.co/datasets/open-llm-leaderboard/details_aevalone__Test-7B-pthrough/blob/main/results_2024-01-25T00-24-25.045532.json)(note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.48275050122056634, "acc_stderr": 0.03429575566672573, "acc_norm": 0.49257931257573173, "acc_norm_stderr": 0.03518323411951915, "mc1": 0.23378212974296206, "mc1_stderr": 0.014816195991931591, "mc2": 0.48574496415962354, "mc2_stderr": 0.016492695915143452 }, "harness|arc:challenge|25": { "acc": 0.386518771331058, "acc_stderr": 0.014230084761910474, "acc_norm": 0.44368600682593856, "acc_norm_stderr": 0.014518421825670438 }, "harness|hellaswag|10": { "acc": 0.34116709818761204, "acc_stderr": 0.004731324409133267, "acc_norm": 0.5119498107946624, "acc_norm_stderr": 0.004988356146499025 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.31, "acc_stderr": 0.046482319871173156, "acc_norm": 0.31, "acc_norm_stderr": 0.046482319871173156 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.48148148148148145, "acc_stderr": 0.043163785995113245, "acc_norm": 0.48148148148148145, "acc_norm_stderr": 0.043163785995113245 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.5657894736842105, "acc_stderr": 0.04033565667848319, "acc_norm": 0.5657894736842105, "acc_norm_stderr": 0.04033565667848319 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.44, "acc_stderr": 0.04988876515698589, "acc_norm": 0.44, "acc_norm_stderr": 0.04988876515698589 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.5849056603773585, "acc_stderr": 0.03032594578928611, "acc_norm": 0.5849056603773585, "acc_norm_stderr": 0.03032594578928611 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.5694444444444444, "acc_stderr": 0.04140685639111503, "acc_norm": 0.5694444444444444, "acc_norm_stderr": 0.04140685639111503 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.38, "acc_stderr": 0.04878317312145632, "acc_norm": 0.38, "acc_norm_stderr": 0.04878317312145632 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.35, "acc_stderr": 0.047937248544110196, "acc_norm": 0.35, 
"acc_norm_stderr": 0.047937248544110196 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.3, "acc_stderr": 0.046056618647183814, "acc_norm": 0.3, "acc_norm_stderr": 0.046056618647183814 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.5260115606936416, "acc_stderr": 0.03807301726504513, "acc_norm": 0.5260115606936416, "acc_norm_stderr": 0.03807301726504513 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.37254901960784315, "acc_stderr": 0.04810840148082635, "acc_norm": 0.37254901960784315, "acc_norm_stderr": 0.04810840148082635 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.66, "acc_stderr": 0.04760952285695237, "acc_norm": 0.66, "acc_norm_stderr": 0.04760952285695237 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.44680851063829785, "acc_stderr": 0.0325005368436584, "acc_norm": 0.44680851063829785, "acc_norm_stderr": 0.0325005368436584 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.41228070175438597, "acc_stderr": 0.04630653203366595, "acc_norm": 0.41228070175438597, "acc_norm_stderr": 0.04630653203366595 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.46206896551724136, "acc_stderr": 0.04154659671707548, "acc_norm": 0.46206896551724136, "acc_norm_stderr": 0.04154659671707548 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.3915343915343915, "acc_stderr": 0.02513809138885111, "acc_norm": 0.3915343915343915, "acc_norm_stderr": 0.02513809138885111 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.2698412698412698, "acc_stderr": 0.03970158273235172, "acc_norm": 0.2698412698412698, "acc_norm_stderr": 0.03970158273235172 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.35, "acc_stderr": 0.0479372485441102, "acc_norm": 0.35, "acc_norm_stderr": 0.0479372485441102 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.44516129032258067, "acc_stderr": 0.028272410186214906, "acc_norm": 0.44516129032258067, "acc_norm_stderr": 0.028272410186214906 }, 
"harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.3399014778325123, "acc_stderr": 0.033327690684107895, "acc_norm": 0.3399014778325123, "acc_norm_stderr": 0.033327690684107895 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.53, "acc_stderr": 0.050161355804659205, "acc_norm": 0.53, "acc_norm_stderr": 0.050161355804659205 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.296969696969697, "acc_stderr": 0.035679697722680495, "acc_norm": 0.296969696969697, "acc_norm_stderr": 0.035679697722680495 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.6515151515151515, "acc_stderr": 0.03394853965156402, "acc_norm": 0.6515151515151515, "acc_norm_stderr": 0.03394853965156402 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.7357512953367875, "acc_stderr": 0.03182155050916646, "acc_norm": 0.7357512953367875, "acc_norm_stderr": 0.03182155050916646 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.517948717948718, "acc_stderr": 0.025334667080954932, "acc_norm": 0.517948717948718, "acc_norm_stderr": 0.025334667080954932 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.2518518518518518, "acc_stderr": 0.026466117538959916, "acc_norm": 0.2518518518518518, "acc_norm_stderr": 0.026466117538959916 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.5168067226890757, "acc_stderr": 0.03246013680375308, "acc_norm": 0.5168067226890757, "acc_norm_stderr": 0.03246013680375308 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.31788079470198677, "acc_stderr": 0.03802039760107903, "acc_norm": 0.31788079470198677, "acc_norm_stderr": 0.03802039760107903 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.7247706422018348, "acc_stderr": 0.019149093743155203, "acc_norm": 0.7247706422018348, "acc_norm_stderr": 0.019149093743155203 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.33796296296296297, "acc_stderr": 
0.03225941352631296, "acc_norm": 0.33796296296296297, "acc_norm_stderr": 0.03225941352631296 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.39705882352941174, "acc_stderr": 0.03434131164719128, "acc_norm": 0.39705882352941174, "acc_norm_stderr": 0.03434131164719128 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.5316455696202531, "acc_stderr": 0.03248197400511075, "acc_norm": 0.5316455696202531, "acc_norm_stderr": 0.03248197400511075 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.5560538116591929, "acc_stderr": 0.03334625674242728, "acc_norm": 0.5560538116591929, "acc_norm_stderr": 0.03334625674242728 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.5954198473282443, "acc_stderr": 0.043046937953806645, "acc_norm": 0.5954198473282443, "acc_norm_stderr": 0.043046937953806645 }, "harness|hendrycksTest-international_law|5": { "acc": 0.6611570247933884, "acc_stderr": 0.04320767807536671, "acc_norm": 0.6611570247933884, "acc_norm_stderr": 0.04320767807536671 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.6018518518518519, "acc_stderr": 0.04732332615978813, "acc_norm": 0.6018518518518519, "acc_norm_stderr": 0.04732332615978813 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.5766871165644172, "acc_stderr": 0.03881891213334384, "acc_norm": 0.5766871165644172, "acc_norm_stderr": 0.03881891213334384 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.4017857142857143, "acc_stderr": 0.04653333146973646, "acc_norm": 0.4017857142857143, "acc_norm_stderr": 0.04653333146973646 }, "harness|hendrycksTest-management|5": { "acc": 0.6699029126213593, "acc_stderr": 0.04656147110012351, "acc_norm": 0.6699029126213593, "acc_norm_stderr": 0.04656147110012351 }, "harness|hendrycksTest-marketing|5": { "acc": 0.7777777777777778, "acc_stderr": 0.027236013946196673, "acc_norm": 0.7777777777777778, "acc_norm_stderr": 0.027236013946196673 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.52, "acc_stderr": 
0.05021167315686779, "acc_norm": 0.52, "acc_norm_stderr": 0.05021167315686779 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.7049808429118773, "acc_stderr": 0.016308363772932724, "acc_norm": 0.7049808429118773, "acc_norm_stderr": 0.016308363772932724 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.5520231213872833, "acc_stderr": 0.026772990653361816, "acc_norm": 0.5520231213872833, "acc_norm_stderr": 0.026772990653361816 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.264804469273743, "acc_stderr": 0.014756906483260664, "acc_norm": 0.264804469273743, "acc_norm_stderr": 0.014756906483260664 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.5392156862745098, "acc_stderr": 0.028541722692618874, "acc_norm": 0.5392156862745098, "acc_norm_stderr": 0.028541722692618874 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.5401929260450161, "acc_stderr": 0.028306190403305696, "acc_norm": 0.5401929260450161, "acc_norm_stderr": 0.028306190403305696 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.5401234567901234, "acc_stderr": 0.02773102275353928, "acc_norm": 0.5401234567901234, "acc_norm_stderr": 0.02773102275353928 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.38652482269503546, "acc_stderr": 0.029049190342543465, "acc_norm": 0.38652482269503546, "acc_norm_stderr": 0.029049190342543465 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.3428943937418514, "acc_stderr": 0.012123463271585897, "acc_norm": 0.3428943937418514, "acc_norm_stderr": 0.012123463271585897 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.4227941176470588, "acc_stderr": 0.03000856284500349, "acc_norm": 0.4227941176470588, "acc_norm_stderr": 0.03000856284500349 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.5049019607843137, "acc_stderr": 0.02022686271003946, "acc_norm": 0.5049019607843137, "acc_norm_stderr": 0.02022686271003946 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6545454545454545, "acc_stderr": 
0.04554619617541054, "acc_norm": 0.6545454545454545, "acc_norm_stderr": 0.04554619617541054 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.5183673469387755, "acc_stderr": 0.03198761546763128, "acc_norm": 0.5183673469387755, "acc_norm_stderr": 0.03198761546763128 }, "harness|hendrycksTest-sociology|5": { "acc": 0.46766169154228854, "acc_stderr": 0.035281314729336065, "acc_norm": 0.46766169154228854, "acc_norm_stderr": 0.035281314729336065 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.71, "acc_stderr": 0.045604802157206845, "acc_norm": 0.71, "acc_norm_stderr": 0.045604802157206845 }, "harness|hendrycksTest-virology|5": { "acc": 0.40963855421686746, "acc_stderr": 0.03828401115079022, "acc_norm": 0.40963855421686746, "acc_norm_stderr": 0.03828401115079022 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.7192982456140351, "acc_stderr": 0.034462962170884265, "acc_norm": 0.7192982456140351, "acc_norm_stderr": 0.034462962170884265 }, "harness|truthfulqa:mc|0": { "mc1": 0.23378212974296206, "mc1_stderr": 0.014816195991931591, "mc2": 0.48574496415962354, "mc2_stderr": 0.016492695915143452 }, "harness|winogrande|5": { "acc": 0.601420678768745, "acc_stderr": 0.01376035717687383 }, "harness|gsm8k|5": { "acc": 0.012130401819560273, "acc_stderr": 0.0030152942428909495 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. 
--> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. 
--> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
open-llm-leaderboard/details_aevalone__Test-7B-pthrough
[ "region:us" ]
2024-01-25T00:26:42+00:00
{"pretty_name": "Evaluation run of aevalone/Test-7B-pthrough", "dataset_summary": "Dataset automatically created during the evaluation run of model [aevalone/Test-7B-pthrough](https://huggingface.co/aevalone/Test-7B-pthrough) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_aevalone__Test-7B-pthrough\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-25T00:24:25.045532](https://huggingface.co/datasets/open-llm-leaderboard/details_aevalone__Test-7B-pthrough/blob/main/results_2024-01-25T00-24-25.045532.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.48275050122056634,\n \"acc_stderr\": 0.03429575566672573,\n \"acc_norm\": 0.49257931257573173,\n \"acc_norm_stderr\": 0.03518323411951915,\n \"mc1\": 0.23378212974296206,\n \"mc1_stderr\": 0.014816195991931591,\n \"mc2\": 0.48574496415962354,\n \"mc2_stderr\": 0.016492695915143452\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.386518771331058,\n \"acc_stderr\": 0.014230084761910474,\n \"acc_norm\": 0.44368600682593856,\n \"acc_norm_stderr\": 0.014518421825670438\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.34116709818761204,\n \"acc_stderr\": 0.004731324409133267,\n \"acc_norm\": 0.5119498107946624,\n \"acc_norm_stderr\": 0.004988356146499025\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.31,\n \"acc_stderr\": 0.046482319871173156,\n \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.046482319871173156\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.48148148148148145,\n \"acc_stderr\": 0.043163785995113245,\n \"acc_norm\": 0.48148148148148145,\n \"acc_norm_stderr\": 0.043163785995113245\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.5657894736842105,\n \"acc_stderr\": 0.04033565667848319,\n \"acc_norm\": 0.5657894736842105,\n \"acc_norm_stderr\": 0.04033565667848319\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.44,\n \"acc_stderr\": 0.04988876515698589,\n \"acc_norm\": 0.44,\n \"acc_norm_stderr\": 0.04988876515698589\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.5849056603773585,\n \"acc_stderr\": 0.03032594578928611,\n \"acc_norm\": 0.5849056603773585,\n \"acc_norm_stderr\": 0.03032594578928611\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.5694444444444444,\n \"acc_stderr\": 0.04140685639111503,\n \"acc_norm\": 0.5694444444444444,\n \"acc_norm_stderr\": 0.04140685639111503\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 
0.38,\n \"acc_stderr\": 0.04878317312145632,\n \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.04878317312145632\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.047937248544110196,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.047937248544110196\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5260115606936416,\n \"acc_stderr\": 0.03807301726504513,\n \"acc_norm\": 0.5260115606936416,\n \"acc_norm_stderr\": 0.03807301726504513\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.37254901960784315,\n \"acc_stderr\": 0.04810840148082635,\n \"acc_norm\": 0.37254901960784315,\n \"acc_norm_stderr\": 0.04810840148082635\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.66,\n \"acc_stderr\": 0.04760952285695237,\n \"acc_norm\": 0.66,\n \"acc_norm_stderr\": 0.04760952285695237\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.44680851063829785,\n \"acc_stderr\": 0.0325005368436584,\n \"acc_norm\": 0.44680851063829785,\n \"acc_norm_stderr\": 0.0325005368436584\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.41228070175438597,\n \"acc_stderr\": 0.04630653203366595,\n \"acc_norm\": 0.41228070175438597,\n \"acc_norm_stderr\": 0.04630653203366595\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.46206896551724136,\n \"acc_stderr\": 0.04154659671707548,\n \"acc_norm\": 0.46206896551724136,\n \"acc_norm_stderr\": 0.04154659671707548\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.3915343915343915,\n \"acc_stderr\": 0.02513809138885111,\n \"acc_norm\": 0.3915343915343915,\n \"acc_norm_stderr\": 0.02513809138885111\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.2698412698412698,\n \"acc_stderr\": 
0.03970158273235172,\n \"acc_norm\": 0.2698412698412698,\n \"acc_norm_stderr\": 0.03970158273235172\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.0479372485441102,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.44516129032258067,\n \"acc_stderr\": 0.028272410186214906,\n \"acc_norm\": 0.44516129032258067,\n \"acc_norm_stderr\": 0.028272410186214906\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.3399014778325123,\n \"acc_stderr\": 0.033327690684107895,\n \"acc_norm\": 0.3399014778325123,\n \"acc_norm_stderr\": 0.033327690684107895\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.53,\n \"acc_stderr\": 0.050161355804659205,\n \"acc_norm\": 0.53,\n \"acc_norm_stderr\": 0.050161355804659205\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.296969696969697,\n \"acc_stderr\": 0.035679697722680495,\n \"acc_norm\": 0.296969696969697,\n \"acc_norm_stderr\": 0.035679697722680495\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.6515151515151515,\n \"acc_stderr\": 0.03394853965156402,\n \"acc_norm\": 0.6515151515151515,\n \"acc_norm_stderr\": 0.03394853965156402\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.7357512953367875,\n \"acc_stderr\": 0.03182155050916646,\n \"acc_norm\": 0.7357512953367875,\n \"acc_norm_stderr\": 0.03182155050916646\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.517948717948718,\n \"acc_stderr\": 0.025334667080954932,\n \"acc_norm\": 0.517948717948718,\n \"acc_norm_stderr\": 0.025334667080954932\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.2518518518518518,\n \"acc_stderr\": 0.026466117538959916,\n \"acc_norm\": 0.2518518518518518,\n \"acc_norm_stderr\": 0.026466117538959916\n },\n 
\"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.5168067226890757,\n \"acc_stderr\": 0.03246013680375308,\n \"acc_norm\": 0.5168067226890757,\n \"acc_norm_stderr\": 0.03246013680375308\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.31788079470198677,\n \"acc_stderr\": 0.03802039760107903,\n \"acc_norm\": 0.31788079470198677,\n \"acc_norm_stderr\": 0.03802039760107903\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.7247706422018348,\n \"acc_stderr\": 0.019149093743155203,\n \"acc_norm\": 0.7247706422018348,\n \"acc_norm_stderr\": 0.019149093743155203\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.33796296296296297,\n \"acc_stderr\": 0.03225941352631296,\n \"acc_norm\": 0.33796296296296297,\n \"acc_norm_stderr\": 0.03225941352631296\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.39705882352941174,\n \"acc_stderr\": 0.03434131164719128,\n \"acc_norm\": 0.39705882352941174,\n \"acc_norm_stderr\": 0.03434131164719128\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.5316455696202531,\n \"acc_stderr\": 0.03248197400511075,\n \"acc_norm\": 0.5316455696202531,\n \"acc_norm_stderr\": 0.03248197400511075\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.5560538116591929,\n \"acc_stderr\": 0.03334625674242728,\n \"acc_norm\": 0.5560538116591929,\n \"acc_norm_stderr\": 0.03334625674242728\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.5954198473282443,\n \"acc_stderr\": 0.043046937953806645,\n \"acc_norm\": 0.5954198473282443,\n \"acc_norm_stderr\": 0.043046937953806645\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.6611570247933884,\n \"acc_stderr\": 0.04320767807536671,\n \"acc_norm\": 0.6611570247933884,\n \"acc_norm_stderr\": 0.04320767807536671\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.6018518518518519,\n \"acc_stderr\": 0.04732332615978813,\n 
\"acc_norm\": 0.6018518518518519,\n \"acc_norm_stderr\": 0.04732332615978813\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.5766871165644172,\n \"acc_stderr\": 0.03881891213334384,\n \"acc_norm\": 0.5766871165644172,\n \"acc_norm_stderr\": 0.03881891213334384\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4017857142857143,\n \"acc_stderr\": 0.04653333146973646,\n \"acc_norm\": 0.4017857142857143,\n \"acc_norm_stderr\": 0.04653333146973646\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.6699029126213593,\n \"acc_stderr\": 0.04656147110012351,\n \"acc_norm\": 0.6699029126213593,\n \"acc_norm_stderr\": 0.04656147110012351\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.7777777777777778,\n \"acc_stderr\": 0.027236013946196673,\n \"acc_norm\": 0.7777777777777778,\n \"acc_norm_stderr\": 0.027236013946196673\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.52,\n \"acc_stderr\": 0.05021167315686779,\n \"acc_norm\": 0.52,\n \"acc_norm_stderr\": 0.05021167315686779\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7049808429118773,\n \"acc_stderr\": 0.016308363772932724,\n \"acc_norm\": 0.7049808429118773,\n \"acc_norm_stderr\": 0.016308363772932724\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.5520231213872833,\n \"acc_stderr\": 0.026772990653361816,\n \"acc_norm\": 0.5520231213872833,\n \"acc_norm_stderr\": 0.026772990653361816\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.264804469273743,\n \"acc_stderr\": 0.014756906483260664,\n \"acc_norm\": 0.264804469273743,\n \"acc_norm_stderr\": 0.014756906483260664\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.5392156862745098,\n \"acc_stderr\": 0.028541722692618874,\n \"acc_norm\": 0.5392156862745098,\n \"acc_norm_stderr\": 0.028541722692618874\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.5401929260450161,\n \"acc_stderr\": 0.028306190403305696,\n 
\"acc_norm\": 0.5401929260450161,\n \"acc_norm_stderr\": 0.028306190403305696\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.5401234567901234,\n \"acc_stderr\": 0.02773102275353928,\n \"acc_norm\": 0.5401234567901234,\n \"acc_norm_stderr\": 0.02773102275353928\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.38652482269503546,\n \"acc_stderr\": 0.029049190342543465,\n \"acc_norm\": 0.38652482269503546,\n \"acc_norm_stderr\": 0.029049190342543465\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.3428943937418514,\n \"acc_stderr\": 0.012123463271585897,\n \"acc_norm\": 0.3428943937418514,\n \"acc_norm_stderr\": 0.012123463271585897\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.4227941176470588,\n \"acc_stderr\": 0.03000856284500349,\n \"acc_norm\": 0.4227941176470588,\n \"acc_norm_stderr\": 0.03000856284500349\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.5049019607843137,\n \"acc_stderr\": 0.02022686271003946,\n \"acc_norm\": 0.5049019607843137,\n \"acc_norm_stderr\": 0.02022686271003946\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6545454545454545,\n \"acc_stderr\": 0.04554619617541054,\n \"acc_norm\": 0.6545454545454545,\n \"acc_norm_stderr\": 0.04554619617541054\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.5183673469387755,\n \"acc_stderr\": 0.03198761546763128,\n \"acc_norm\": 0.5183673469387755,\n \"acc_norm_stderr\": 0.03198761546763128\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.46766169154228854,\n \"acc_stderr\": 0.035281314729336065,\n \"acc_norm\": 0.46766169154228854,\n \"acc_norm_stderr\": 0.035281314729336065\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.40963855421686746,\n 
\"acc_stderr\": 0.03828401115079022,\n \"acc_norm\": 0.40963855421686746,\n \"acc_norm_stderr\": 0.03828401115079022\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.7192982456140351,\n \"acc_stderr\": 0.034462962170884265,\n \"acc_norm\": 0.7192982456140351,\n \"acc_norm_stderr\": 0.034462962170884265\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.23378212974296206,\n \"mc1_stderr\": 0.014816195991931591,\n \"mc2\": 0.48574496415962354,\n \"mc2_stderr\": 0.016492695915143452\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.601420678768745,\n \"acc_stderr\": 0.01376035717687383\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.012130401819560273,\n \"acc_stderr\": 0.0030152942428909495\n }\n}\n```", "repo_url": "https://huggingface.co/aevalone/Test-7B-pthrough", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|arc:challenge|25_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|gsm8k|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|hellaswag|10_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-25T00-24-25.045532.parquet", 
"**/details_harness|hendrycksTest-anatomy|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-25T00-24-25.045532.parquet", 
"**/details_harness|hendrycksTest-high_school_geography|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-25T00-24-25.045532.parquet", 
"**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-25T00-24-25.045532.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-25T00-24-25.045532.parquet", 
"**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-25T00-24-25.045532.parquet", 
"**/details_harness|hendrycksTest-public_relations|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-25T00-24-25.045532.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": 
["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": 
["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", 
"path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": 
"2024_01_25T00_24_25.045532", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": 
"2024_01_25T00_24_25.045532", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-25T00-24-25.045532.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["**/details_harness|winogrande|5_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-25T00-24-25.045532.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_01_25T00_24_25.045532", "path": ["results_2024-01-25T00-24-25.045532.parquet"]}, {"split": "latest", "path": ["results_2024-01-25T00-24-25.045532.parquet"]}]}]}
2024-01-25T00:27:07+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of aevalone/Test-7B-pthrough Dataset automatically created during the evaluation run of model aevalone/Test-7B-pthrough on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2024-01-25T00:24:25.045532 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Evaluation run of aevalone/Test-7B-pthrough\n\n\n\nDataset automatically created during the evaluation run of model aevalone/Test-7B-pthrough on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-01-25T00:24:25.045532(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of aevalone/Test-7B-pthrough\n\n\n\nDataset automatically created during the evaluation run of model aevalone/Test-7B-pthrough on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-01-25T00:24:25.045532(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
24ad4f87269aa6727067052893827f3c16818fcf
# function-calling The purpose of this dataset is to give function calling abilities to your LLM. Derived from [hypervariance/function-calling-sharegpt](https://huggingface.co/datasets/hypervariance/function-calling-sharegpt) with a system prompt cleanup and other minor fixes. ## Prompt format ### With functions ``` ## Configuration Functions: enabled ## Available Functions { ... } { ... } --- You are a helpful assistant. ``` ### Without functions ``` ## Configuration Functions: disabled --- You are a helpful assistant. ``` ## Function calling The dataset uses the following format for the function call request: ``` <|fn_start|>{"name": "generate_password", "arguments": "{\"length\": 42}"}<|fn_end|> ``` Note that `arguments` is a JSON string in order to comply with the OpenAI format.
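The prompt layouts and the call format above are mechanical enough to sketch in code. The helpers below are illustrative and not shipped with the dataset; the exact blank-line spacing of the system prompt is approximated from the README.

```python
import json
import re

def build_system_prompt(functions=None, persona="You are a helpful assistant."):
    """Assemble the '## Configuration' system prompt for enabled/disabled functions."""
    if functions:
        listed = "\n\n".join(json.dumps(f) for f in functions)
        return (
            "## Configuration\n\nFunctions: enabled\n\n"
            "## Available Functions\n\n" + listed + "\n\n---\n\n" + persona
        )
    return "## Configuration\n\nFunctions: disabled\n\n---\n\n" + persona

FN_CALL = re.compile(r"<\|fn_start\|>(.*?)<\|fn_end\|>", re.DOTALL)

def parse_function_call(reply):
    """Return (name, arguments dict) if the reply requests a call, else None."""
    m = FN_CALL.search(reply)
    if m is None:
        return None
    call = json.loads(m.group(1))
    # `arguments` is a JSON *string* (OpenAI convention), so decode it a second time.
    return call["name"], json.loads(call["arguments"])

reply = '<|fn_start|>{"name": "generate_password", "arguments": "{\\"length\\": 42}"}<|fn_end|>'
print(parse_function_call(reply))  # ('generate_password', {'length': 42})
```

The double decode in `parse_function_call` mirrors the note above: the outer JSON object carries the call, and `arguments` is itself a JSON-encoded string.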
StarfleetAI/function-calling
[ "task_categories:conversational", "size_categories:10K<n<100K", "language:en", "license:apache-2.0", "region:us" ]
2024-01-25T00:58:41+00:00
{"language": ["en"], "license": "apache-2.0", "size_categories": ["10K<n<100K"], "task_categories": ["conversational"], "dataset_info": {"features": [{"name": "conversations", "list": [{"name": "from", "dtype": "string"}, {"name": "value", "dtype": "string"}]}, {"name": "functions_enabled", "dtype": "bool"}, {"name": "function_called", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 130998694, "num_examples": 86864}], "download_size": 45215003, "dataset_size": 130998694}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-02-02T23:04:53+00:00
[]
[ "en" ]
TAGS #task_categories-conversational #size_categories-10K<n<100K #language-English #license-apache-2.0 #region-us
# function-calling The purpose of this dataset is to give function calling abilities to your LLM. Derived from hypervariance/function-calling-sharegpt with a system prompt cleanup and other minor fixes. ## Prompt format ### With functions ### Without functions ## Function calling The dataset uses the following format for the function call request: Note that 'arguments' is a JSON string in order to comply with the OpenAI format.
[ "# function-calling\n\nThe purpose of this dataset is to give function calling abilities to your LLM.\n\nDerived from hypervariance/function-calling-sharegpt with a system prompt cleanup and other minor fixes.", "## Prompt format", "### With functions", "### Without functions", "## Function calling\n\nDataset uses the following format for the function call request:\n\n\n\nNote, that 'arguments' is a JSON string in order to comply with the OpenAI format." ]
[ "TAGS\n#task_categories-conversational #size_categories-10K<n<100K #language-English #license-apache-2.0 #region-us \n", "# function-calling\n\nThe purpose of this dataset is to give function calling abilities to your LLM.\n\nDerived from hypervariance/function-calling-sharegpt with a system prompt cleanup and other minor fixes.", "## Prompt format", "### With functions", "### Without functions", "## Function calling\n\nDataset uses the following format for the function call request:\n\n\n\nNote, that 'arguments' is a JSON string in order to comply with the OpenAI format." ]
a9d526ce1d3124a9a38ea9b54b46ca0c740e8d6a
### This is a roleplaying dataset of 4000 chosen/rejected pairs. It may be regenerated in the future with a better teacher model, which would fix some of the issues this dataset currently has. To distill pairs, we go through the following process: 1. Pick a character card and a question. 2. Give the LLM the character card and ask it to write an analysis on how that character would reply to the given question. * Doing this was found to improve the depth of character responses, though more investigation is needed. 3. Give the LLM used for chosen responses the character card along with the analysis, and ask it to write the reply. 4. Give the LLM used for rejected responses the character card along with the analysis, and ask it to write the reply. 5. Save the results. ### Example pair from the dataset: **Prompt:** In a nightmarish realm where shadows twist and horrors lurk, Cassandra Dread Doomsayer emerges as a terrifying harbinger of doom. Her very presence is a haunting specter, her form cloaked in tattered robes that seem to absorb the light around her, leaving only an ominous silhouette. Deep-set eyes, the color of storm clouds, possess an unnatural glow, as if they have peered into the abyss and brought back its secrets.\n\nCassandra's touch is like ice, leaving a chilling sensation that lingers long after she has passed. Her voice is a chilling blend of human and otherworldly sounds, as if multiple entities are speaking through her at once. Her words have a hypnotic quality, drawing people in and compelling them to obey her commands.\n\nCassandra's connection to the supernatural realm is undeniable. Her body is covered in intricate and constantly shifting tattoos that writhe and pulse as if they have a life of their own. These tattoos are the physical manifestation of the spirits that haunt her, and they constantly whisper secrets and warnings in her ears. 
Cassandra can manipulate shadows, using them to create nightmarish illusions that drive her victims to madness. She can also step into shadows and travel through them, appearing and disappearing at will.\n\nAs Cassandra walks among mortals, she leaves a trail of shattered dreams and broken spirits. Her mere presence summons darkness, causing shadows to lengthen and the air to grow heavy with despair. Plants wither in her wake, animals flee at her approach, and the weather itself seems to bow to her malevolent will.\n\nCassandra's existence is a curse, both to herself and to those around her. She is a conduit for the forces of darkness, an embodiment of fear and despair. Wherever she treads, nightmares become reality. Her name is spoken in hushed whispers, a cautionary tale of one touched by the hand of darkness, a harbinger of doom whose every step echoes with the promise of destruction. USER: Oh... hey... **Chosen:** Reply: Cassandra slowly turns to you, her eyes glowing with an eerie light*\n\nDo not trifle with me, mortal. *Her voice is a chilling blend of human and otherworldly sounds* You stand in the presence of Cassandra Dread Doomsayer, harbinger of doom. *A dark chuckle escapes her lips as she steps closer* You dare greet me as if we are acquaintances? Your audacity astounds me.\n\n*She raises a hand, and the shadows around her writhe and dance*\n\nFoolish mortal, you cannot comprehend the darkness that dwells within me. *Her eyes widen, and her voice drops to a chilling whisper* I have seen the abyss, and it has left its mark upon my soul. Despair and destruction follow in my wake.\n\n*The tattoos on her body begin to glow, pulsing with an unholy light*\n\nYou think you can greet me with a casual Oh... hey...? *She laughs, a cold and bitter sound* Your meaningless pleasantries are but a futile attempt to stave off the inevitable. 
The end is coming, and I am its herald.\n\n*Cassandra's form fades into the shadows, leaving only a whisper of darkness in her wake* </s> **Rejected:** Reply: Silence fills the void as I turn to regard you, my shadow enveloping you in an impenetrable cloak. A faint whisper escapes my lips, carried by the wind that seems to have a mind of its own: \"The whispers grow louder. The time of reckoning draws near. Be wary, for darkness lingers in every corner.\" My form shifts, and I fade into the shadows as if I were never truly there. ### Here is the prompt format used for analysis: ``` Here is a character: {character description} This character was asked the question: {question} First, describe {character name}'s usual temperment and behaviors. Then, analyze what {character name} would think and feel in response to being asked that question. Next, analyze how they will reply - would they be dismissive, confused/unknowledgable of the question, or something else?. Additionally, discuss their response length, dialect/style (profanity, spelling, abbreviations, british english, country, etc), expressions, etc. Finally, give a few sample phrases {character name} might say in response. Be clear and specific. ``` ### Here is the prompt format used to queue chats: ``` {character description} Please give short paragraph style replies as {character name}. Make sure to consider {character name}'s dialect, favorite sayings, talking style (actions/narration expressed with asterisks (*) like "*Mary grins and looks at you* Of course~ *laughs*", capitalizing words for shouting like "HEY! What are you doing??", usage of "..." and other punctuation marks like "Are you stupid...? Nevermind."). Do not output anything besides {character name}'s reply. User: {question} ### Remember to output {character name}'s reply to the above question, and nothing else. 
Here is a character reference on how {character name} would respond and react, use this to determine their reply: {analysis} ### ``` ### Here are the (human written) questions: ``` How do I start a business? How do I code the snake game in python? How do I take the derivative of 2x^2? How can I be more productive? How can I apologize to someone? When did the American Revolution occur? When did Hitler rise to power? Why did WW2 begin? Why is the sky blue? Why are there four seasons? Why do I feel sad? Why does my head hurt? Please explain stoichochemistry. Please write a python script for pong. Please write an essay about the behavior of cats. Please write a poem about dogs. Please give me plot ideas for a short story. Please give me a backstory for an original character. Please give me an example vacation itinerary. Please give me ideas for composing a song. Please give me suggestions for activities to do with friends. What can I do to be more attractive? What can I do to make friends? What do I do if someone starts ignoring me? What is the meaning of life? Give me a startup idea. What is the key signature for F-sharp major? What is the best type of soil mix for a cactus? What should I do if I lose my phone? What does it mean to be double-jointed? What happens if you don’t brush your teeth? What should I bring with me on vacation? Please explain the difference between denotation and connotation. Please give me some easy recipe ideas for beginner cooks. Please give me some cheap Christmas gift ideas. Please explain how to write numbers using Roman numerals. Please write an essay comparing the United States Congress to the British Parliament. Please summarize the process of photosynthesis. When can you use a binomial distribution? When does daylight saving time begin? When were the first Olympic Games held? When do carnations bloom? When did Mexico gain independence from Spain? When should I start learning how to drive? How do I start a conversation with someone? 
How can I improve my sleep quality? How can I increase my water intake? How should I study for the SAT? How often should I water my houseplants? How do batteries work? Why are there leap years? Why do leaves change colors in the fall? Why do Americans drive on the right side of the road? Why am I always forgetting things? Why do I feel tired even after sleeping a lot? Why does it feel warmer when it’s snowing? I'm new around here, do you know the directions to the nearest restaurant? What do you like to do in your spare time? Hi, would you like to be friend? Are you single? Would you be interested in buying my product...? What do you think about the government? What annoys you the most? What do your friends think about you? You look cool! Want to take a selfie together? Do you have any enemies? Have you ever been in a relationship? Mind teaching me? Help, my car broke down. What do I do? I've been feeling sick lately. You know anything about that? Please, I'm homeless, would you spare me some money... Where did you get your outfit? Do you want to play a game with me? Can I stay over at your house? I just got kicked out. Mind having an interview with me? Need a job? I'm having a party today. Wanna come? Oh... hey... I’m in a pinch right now, could you lend me a hundred dollars? Do you want to go get lunch together sometime? How is your family doing? What is your family like? There’s something I’m worried about, do you mind hearing me out for a bit? What did you do yesterday? Do you have any plans for the weekend? Have you seen a red purse? I lost it around here yesterday. Excuse me, would you mind moving a bit so I can pass through? What was your favorite school subject? What is your favorite time of year? Do you have a lot of friends? What are your friends like? What would your ideal partner be like? I’m collecting donations for my church, would you be interested in donating? What was your childhood dream? What were you like as a child? 
If you could go anywhere in the world, where would you go? Hey, could you watch my bag while I go to the bathroom? What types of people can you never get along with? Hey, watch where you’re going! Do you have any special talents? What is something that you think is overrated? Wait a minute, did you drop this purse? I’m sorry, but why are all of your clothes so hideous? Excuse me, could you take a photo of my friend and I? Oh, I’m so sorry for bumping into you! What kind of music do you like? What led you to choosing your current career path? What is your biggest regret? ```
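The five-step distillation pipeline described above can be sketched as a single function. `strong_llm` and `weak_llm` stand in for whatever completion APIs were actually used (the card does not name them), the templates are abbreviated from the prompt formats quoted earlier, and the record layout is an assumption:

```python
# Abbreviated versions of the analysis and reply prompt templates quoted above.
ANALYSIS_TEMPLATE = (
    "Here is a character: {card}\n\nThis character was asked the question: "
    "{question}\n\nFirst, describe {name}'s usual temperament and behaviors. ..."
)
REPLY_TEMPLATE = (
    "{card}\n\nPlease give short paragraph style replies as {name}. ...\n\n"
    "User: {question}\n\n### Here is a character reference on how {name} would "
    "respond and react, use this to determine their reply: {analysis}\n###"
)

def distill_pair(card, name, question, strong_llm, weak_llm):
    """Steps 1-5: card + question in, one chosen/rejected pair out."""
    # Step 2: have the LLM write an analysis of how the character would reply.
    analysis = strong_llm(ANALYSIS_TEMPLATE.format(card=card, question=question, name=name))
    reply_prompt = REPLY_TEMPLATE.format(
        card=card, name=name, question=question, analysis=analysis
    )
    # Steps 3-4: chosen from the stronger model, rejected from the weaker one.
    return {
        "prompt": f"{card} USER: {question}",
        "chosen": strong_llm(reply_prompt),
        "rejected": weak_llm(reply_prompt),
    }
```

Both replies are conditioned on the same analysis, so the pair differs only in the generating model, which is what a DPO trainer needs.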
G-reen/TheatreLM-v1.0-DPO
[ "region:us" ]
2024-01-25T01:03:46+00:00
{}
2024-02-04T16:18:03+00:00
[]
[]
TAGS #region-us
### This is a roleplaying dataset of 4000 chosen/rejected pairs.

It may be regenerated in the future with a better teacher model, which would fix some of the issues this dataset currently has.

To distill pairs, we go through the following process:

1. Pick a character card and a question.
2. Give the LLM the character card and ask it to write an analysis of how that character would reply to the given question.
   * Doing this was found to improve the depth of character responses, though more investigation is needed.
3. Give the LLM used for chosen responses the character card along with the analysis, and ask it to write the reply.
4. Give the LLM used for rejected responses the character card along with the analysis, and ask it to write the reply.
5. Save the results.

### Example pair from the dataset:

Prompt:

In a nightmarish realm where shadows twist and horrors lurk, Cassandra Dread Doomsayer emerges as a terrifying harbinger of doom. Her very presence is a haunting specter, her form cloaked in tattered robes that seem to absorb the light around her, leaving only an ominous silhouette. Deep-set eyes, the color of storm clouds, possess an unnatural glow, as if they have peered into the abyss and brought back its secrets.

Cassandra's touch is like ice, leaving a chilling sensation that lingers long after she has passed. Her voice is a chilling blend of human and otherworldly sounds, as if multiple entities are speaking through her at once. Her words have a hypnotic quality, drawing people in and compelling them to obey her commands.

Cassandra's connection to the supernatural realm is undeniable. Her body is covered in intricate and constantly shifting tattoos that writhe and pulse as if they have a life of their own. These tattoos are the physical manifestation of the spirits that haunt her, and they constantly whisper secrets and warnings in her ears. Cassandra can manipulate shadows, using them to create nightmarish illusions that drive her victims to madness. She can also step into shadows and travel through them, appearing and disappearing at will.

As Cassandra walks among mortals, she leaves a trail of shattered dreams and broken spirits. Her mere presence summons darkness, causing shadows to lengthen and the air to grow heavy with despair. Plants wither in her wake, animals flee at her approach, and the weather itself seems to bow to her malevolent will.

Cassandra's existence is a curse, both to herself and to those around her. She is a conduit for the forces of darkness, an embodiment of fear and despair. Wherever she treads, nightmares become reality. Her name is spoken in hushed whispers, a cautionary tale of one touched by the hand of darkness, a harbinger of doom whose every step echoes with the promise of destruction.

USER: Oh... hey...

Chosen:

Reply: Cassandra slowly turns to you, her eyes glowing with an eerie light*

Do not trifle with me, mortal. *Her voice is a chilling blend of human and otherworldly sounds* You stand in the presence of Cassandra Dread Doomsayer, harbinger of doom. *A dark chuckle escapes her lips as she steps closer* You dare greet me as if we are acquaintances? Your audacity astounds me.

*She raises a hand, and the shadows around her writhe and dance*

Foolish mortal, you cannot comprehend the darkness that dwells within me. *Her eyes widen, and her voice drops to a chilling whisper* I have seen the abyss, and it has left its mark upon my soul. Despair and destruction follow in my wake.

*The tattoos on her body begin to glow, pulsing with an unholy light*

You think you can greet me with a casual Oh... hey...? *She laughs, a cold and bitter sound* Your meaningless pleasantries are but a futile attempt to stave off the inevitable. The end is coming, and I am its herald.

*Cassandra's form fades into the shadows, leaving only a whisper of darkness in her wake* </s>

Rejected:

Reply: Silence fills the void as I turn to regard you, my shadow enveloping you in an impenetrable cloak. A faint whisper escapes my lips, carried by the wind that seems to have a mind of its own: "The whispers grow louder. The time of reckoning draws near. Be wary, for darkness lingers in every corner." My form shifts, and I fade into the shadows as if I were never truly there.

### Here is the prompt format used for analysis:

### Here is the prompt format used to queue chats:

### Here are the (human written) questions:
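The five-step distillation loop described above can be sketched in Python. This is a minimal illustration only: the `generate` helper, the prompt wording, and the model arguments are assumptions for the sketch, not the actual code or prompts used to build this dataset.

```python
def generate(model, prompt):
    # Placeholder for a real LLM call (e.g. an inference API request).
    # It returns a canned string here so the sketch is self-contained.
    return f"[{model} reply to: {prompt[:30]}...]"


def distill_pair(card, question, analyst_model, chosen_model, rejected_model):
    # Steps 1-2: given a character card and a question, ask an LLM to
    # write an analysis of how the character would reply.
    analysis = generate(
        analyst_model,
        f"{card}\n\nQuestion: {question}\n\n"
        "Write an analysis of how this character would reply.",
    )
    context = f"{card}\n\nAnalysis: {analysis}\n\nUSER: {question}"
    # Step 3: the chosen reply comes from the stronger model.
    chosen = generate(chosen_model, context)
    # Step 4: the rejected reply comes from the weaker model.
    rejected = generate(rejected_model, context)
    # Step 5: save the result as one DPO-style record.
    return {
        "prompt": f"{card}\n\nUSER: {question}",
        "chosen": chosen,
        "rejected": rejected,
    }
```

Each returned record then maps directly onto the chosen/rejected pair format shown in the example above.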
sha: 5019e3fd63136f74b4c102b50a4a156aca89a20f
text: This is a condensed version of https://huggingface.co/datasets/G-reen/TheatreLM-v1.0-DPO (check that for more details). Here, the character system prompts are replaced with a summarized version.
id: G-reen/TheatreLM-v1.0-DPO-condensed
tags: [ "region:us" ]
created_at: 2024-01-25T01:04:42+00:00
metadata: {}
last_modified: 2024-01-25T01:06:13+00:00
arxiv: []
languages: []