| Column | Type | Min | Max |
|---|---|---|---|
| sha | stringlengths | 40 | 40 |
| text | stringlengths | 1 | 13.4M |
| id | stringlengths | 2 | 117 |
| tags | sequencelengths | 1 | 7.91k |
| created_at | stringlengths | 25 | 25 |
| metadata | stringlengths | 2 | 875k |
| last_modified | stringlengths | 25 | 25 |
| arxiv | sequencelengths | 0 | 25 |
| languages | sequencelengths | 0 | 7.91k |
| tags_str | stringlengths | 17 | 159k |
| text_str | stringlengths | 1 | 447k |
| text_lists | sequencelengths | 0 | 352 |
| processed_texts | sequencelengths | 1 | 353 |
2a4420f4dfe8093e8dbdf6fcea9baffeb47033a6
# Dataset Card for Evaluation run of SCE/Mistral-7B-summ-ia3-tuned

<!-- Provide a quick summary of the dataset. -->

Dataset automatically created during the evaluation run of model [SCE/Mistral-7B-summ-ia3-tuned](https://huggingface.co/SCE/Mistral-7B-summ-ia3-tuned) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). A loading sketch for this aggregated configuration is given at the end of this card.

To load the details from a run, you can for instance do the following:

```python
from datasets import load_dataset

data = load_dataset("open-llm-leaderboard/details_SCE__Mistral-7B-summ-ia3-tuned",
    "harness_winogrande_5",
    split="train")
```

## Latest results

These are the [latest results from run 2024-01-29T08:13:56.817298](https://huggingface.co/datasets/open-llm-leaderboard/details_SCE__Mistral-7B-summ-ia3-tuned/blob/main/results_2024-01-29T08-13-56.817298.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in the results and the "latest" split for each eval):

```python
{ "all": { "acc": 0.5939078794901971, "acc_stderr": 0.03335110320218946, "acc_norm": 0.5990755019629093, "acc_norm_stderr": 0.034037975038657735, "mc1": 0.5458996328029376, "mc1_stderr": 0.017429593091323522, "mc2": 0.6860006822932447, "mc2_stderr": 0.015499769302819941 }, "harness|arc:challenge|25": { "acc": 0.5494880546075085, "acc_stderr": 0.014539646098471625, "acc_norm": 0.5964163822525598, "acc_norm_stderr": 0.014337158914268448 }, "harness|hellaswag|10": { "acc": 0.6739693288189603, "acc_stderr": 0.004678006403691714, "acc_norm": 0.8471420035849433, "acc_norm_stderr": 0.003591151323268333 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.32, "acc_stderr": 0.04688261722621503, "acc_norm": 0.32, "acc_norm_stderr": 0.04688261722621503 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.5703703703703704, "acc_stderr": 0.04276349494376599, "acc_norm": 0.5703703703703704, "acc_norm_stderr": 0.04276349494376599 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.6118421052631579, "acc_stderr": 0.03965842097512744, "acc_norm": 0.6118421052631579, "acc_norm_stderr": 0.03965842097512744 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.6, "acc_stderr": 0.049236596391733084, "acc_norm": 0.6, "acc_norm_stderr": 0.049236596391733084 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.690566037735849, "acc_stderr": 0.028450154794118637, "acc_norm": 0.690566037735849, "acc_norm_stderr": 0.028450154794118637 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.6736111111111112, "acc_stderr": 0.03921067198982266, "acc_norm": 0.6736111111111112, "acc_norm_stderr": 0.03921067198982266 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.41, "acc_stderr": 0.049431107042371025, "acc_norm": 0.41, "acc_norm_stderr": 0.049431107042371025 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.53, "acc_stderr": 0.050161355804659205, "acc_norm": 0.53, "acc_norm_stderr": 
0.050161355804659205 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.34, "acc_stderr": 0.04760952285695235, "acc_norm": 0.34, "acc_norm_stderr": 0.04760952285695235 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.5722543352601156, "acc_stderr": 0.03772446857518026, "acc_norm": 0.5722543352601156, "acc_norm_stderr": 0.03772446857518026 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.4117647058823529, "acc_stderr": 0.048971049527263666, "acc_norm": 0.4117647058823529, "acc_norm_stderr": 0.048971049527263666 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.65, "acc_stderr": 0.0479372485441102, "acc_norm": 0.65, "acc_norm_stderr": 0.0479372485441102 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.49361702127659574, "acc_stderr": 0.03268335899936336, "acc_norm": 0.49361702127659574, "acc_norm_stderr": 0.03268335899936336 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.40350877192982454, "acc_stderr": 0.046151869625837026, "acc_norm": 0.40350877192982454, "acc_norm_stderr": 0.046151869625837026 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.5448275862068965, "acc_stderr": 0.04149886942192117, "acc_norm": 0.5448275862068965, "acc_norm_stderr": 0.04149886942192117 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.3783068783068783, "acc_stderr": 0.02497695405315525, "acc_norm": 0.3783068783068783, "acc_norm_stderr": 0.02497695405315525 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.4126984126984127, "acc_stderr": 0.04403438954768176, "acc_norm": 0.4126984126984127, "acc_norm_stderr": 0.04403438954768176 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.35, "acc_stderr": 0.0479372485441102, "acc_norm": 0.35, "acc_norm_stderr": 0.0479372485441102 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.6451612903225806, "acc_stderr": 0.027218889773308753, "acc_norm": 0.6451612903225806, "acc_norm_stderr": 0.027218889773308753 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.46798029556650245, "acc_stderr": 0.035107665979592154, "acc_norm": 0.46798029556650245, "acc_norm_stderr": 0.035107665979592154 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.63, "acc_stderr": 0.04852365870939098, "acc_norm": 0.63, "acc_norm_stderr": 0.04852365870939098 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.7151515151515152, "acc_stderr": 0.03524390844511781, "acc_norm": 0.7151515151515152, "acc_norm_stderr": 0.03524390844511781 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.7373737373737373, "acc_stderr": 0.031353050095330855, "acc_norm": 0.7373737373737373, "acc_norm_stderr": 0.031353050095330855 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.8601036269430051, "acc_stderr": 0.025033870583015178, "acc_norm": 0.8601036269430051, "acc_norm_stderr": 0.025033870583015178 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.5717948717948718, "acc_stderr": 0.025088301454694834, "acc_norm": 0.5717948717948718, "acc_norm_stderr": 0.025088301454694834 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.3, "acc_stderr": 0.027940457136228402, "acc_norm": 0.3, "acc_norm_stderr": 0.027940457136228402 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.6638655462184874, "acc_stderr": 0.030684737115135363, "acc_norm": 0.6638655462184874, "acc_norm_stderr": 0.030684737115135363 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.33774834437086093, "acc_stderr": 
0.03861557546255169, "acc_norm": 0.33774834437086093, "acc_norm_stderr": 0.03861557546255169 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.7926605504587156, "acc_stderr": 0.017381415563608678, "acc_norm": 0.7926605504587156, "acc_norm_stderr": 0.017381415563608678 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.4444444444444444, "acc_stderr": 0.03388857118502326, "acc_norm": 0.4444444444444444, "acc_norm_stderr": 0.03388857118502326 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.7205882352941176, "acc_stderr": 0.03149328104507957, "acc_norm": 0.7205882352941176, "acc_norm_stderr": 0.03149328104507957 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.7426160337552743, "acc_stderr": 0.02845882099146031, "acc_norm": 0.7426160337552743, "acc_norm_stderr": 0.02845882099146031 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.6143497757847534, "acc_stderr": 0.03266842214289201, "acc_norm": 0.6143497757847534, "acc_norm_stderr": 0.03266842214289201 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.6870229007633588, "acc_stderr": 0.04066962905677698, "acc_norm": 0.6870229007633588, "acc_norm_stderr": 0.04066962905677698 }, "harness|hendrycksTest-international_law|5": { "acc": 0.7851239669421488, "acc_stderr": 0.037494924487096966, "acc_norm": 0.7851239669421488, "acc_norm_stderr": 0.037494924487096966 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.6666666666666666, "acc_stderr": 0.04557239513497752, "acc_norm": 0.6666666666666666, "acc_norm_stderr": 0.04557239513497752 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7361963190184049, "acc_stderr": 0.03462419931615623, "acc_norm": 0.7361963190184049, "acc_norm_stderr": 0.03462419931615623 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.38392857142857145, "acc_stderr": 0.04616143075028547, "acc_norm": 0.38392857142857145, "acc_norm_stderr": 0.04616143075028547 }, "harness|hendrycksTest-management|5": { "acc": 0.7572815533980582, "acc_stderr": 0.04245022486384495, "acc_norm": 0.7572815533980582, "acc_norm_stderr": 0.04245022486384495 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8418803418803419, "acc_stderr": 0.023902325549560396, "acc_norm": 0.8418803418803419, "acc_norm_stderr": 0.023902325549560396 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.68, "acc_stderr": 0.046882617226215034, "acc_norm": 0.68, "acc_norm_stderr": 0.046882617226215034 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.7675606641123882, "acc_stderr": 0.015104550008905707, "acc_norm": 0.7675606641123882, "acc_norm_stderr": 0.015104550008905707 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.684971098265896, "acc_stderr": 0.025009313790069727, "acc_norm": 0.684971098265896, "acc_norm_stderr": 0.025009313790069727 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.2849162011173184, "acc_stderr": 0.015096222302469809, "acc_norm": 0.2849162011173184, "acc_norm_stderr": 0.015096222302469809 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.6568627450980392, "acc_stderr": 0.02718449890994162, "acc_norm": 0.6568627450980392, "acc_norm_stderr": 0.02718449890994162 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.6816720257234726, "acc_stderr": 0.026457225067811025, "acc_norm": 0.6816720257234726, "acc_norm_stderr": 0.026457225067811025 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.7037037037037037, "acc_stderr": 0.025407197798890165, "acc_norm": 0.7037037037037037, "acc_norm_stderr": 0.025407197798890165 }, 
"harness|hendrycksTest-professional_accounting|5": { "acc": 0.4716312056737589, "acc_stderr": 0.029779450957303062, "acc_norm": 0.4716312056737589, "acc_norm_stderr": 0.029779450957303062 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.41916558018252936, "acc_stderr": 0.012602244505788238, "acc_norm": 0.41916558018252936, "acc_norm_stderr": 0.012602244505788238 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.6176470588235294, "acc_stderr": 0.02952009569768776, "acc_norm": 0.6176470588235294, "acc_norm_stderr": 0.02952009569768776 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6274509803921569, "acc_stderr": 0.019559646809215937, "acc_norm": 0.6274509803921569, "acc_norm_stderr": 0.019559646809215937 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.7363636363636363, "acc_stderr": 0.04220224692971987, "acc_norm": 0.7363636363636363, "acc_norm_stderr": 0.04220224692971987 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.6857142857142857, "acc_stderr": 0.029719329422417468, "acc_norm": 0.6857142857142857, "acc_norm_stderr": 0.029719329422417468 }, "harness|hendrycksTest-sociology|5": { "acc": 0.7164179104477612, "acc_stderr": 0.03187187537919797, "acc_norm": 0.7164179104477612, "acc_norm_stderr": 0.03187187537919797 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.79, "acc_stderr": 0.040936018074033256, "acc_norm": 0.79, "acc_norm_stderr": 0.040936018074033256 }, "harness|hendrycksTest-virology|5": { "acc": 0.4879518072289157, "acc_stderr": 0.0389136449583582, "acc_norm": 0.4879518072289157, "acc_norm_stderr": 0.0389136449583582 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.8245614035087719, "acc_stderr": 0.029170885500727665, "acc_norm": 0.8245614035087719, "acc_norm_stderr": 0.029170885500727665 }, "harness|truthfulqa:mc|0": { "mc1": 0.5458996328029376, "mc1_stderr": 0.017429593091323522, "mc2": 0.6860006822932447, "mc2_stderr": 0.015499769302819941 }, "harness|winogrande|5": { "acc": 0.7671665351223362, "acc_stderr": 0.011878201073856542 }, "harness|gsm8k|5": { "acc": 0.33586050037907506, "acc_stderr": 0.013009224714267369 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. 
news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
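As a complement to the per-task loading example near the top of this card, the aggregated "results" configuration can be pulled the same way. A minimal sketch, assuming the `latest` split alias defined in this repository's configuration metadata:

```python
from datasets import load_dataset

# Aggregated metrics for the run; "latest" aliases the most recent
# timestamped split in this repository.
results = load_dataset(
    "open-llm-leaderboard/details_SCE__Mistral-7B-summ-ia3-tuned",
    "results",
    split="latest",
)

print(results[0])  # one row of aggregated results per run
```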
open-llm-leaderboard/details_SCE__Mistral-7B-summ-ia3-tuned
[ "region:us" ]
2024-01-29T08:16:19+00:00
{"pretty_name": "Evaluation run of SCE/Mistral-7B-summ-ia3-tuned", "dataset_summary": "Dataset automatically created during the evaluation run of model [SCE/Mistral-7B-summ-ia3-tuned](https://huggingface.co/SCE/Mistral-7B-summ-ia3-tuned) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_SCE__Mistral-7B-summ-ia3-tuned\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-29T08:13:56.817298](https://huggingface.co/datasets/open-llm-leaderboard/details_SCE__Mistral-7B-summ-ia3-tuned/blob/main/results_2024-01-29T08-13-56.817298.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5939078794901971,\n \"acc_stderr\": 0.03335110320218946,\n \"acc_norm\": 0.5990755019629093,\n \"acc_norm_stderr\": 0.034037975038657735,\n \"mc1\": 0.5458996328029376,\n \"mc1_stderr\": 0.017429593091323522,\n \"mc2\": 0.6860006822932447,\n \"mc2_stderr\": 0.015499769302819941\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5494880546075085,\n \"acc_stderr\": 0.014539646098471625,\n \"acc_norm\": 0.5964163822525598,\n \"acc_norm_stderr\": 0.014337158914268448\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6739693288189603,\n \"acc_stderr\": 0.004678006403691714,\n \"acc_norm\": 0.8471420035849433,\n \"acc_norm_stderr\": 0.003591151323268333\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.04688261722621503,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.04688261722621503\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5703703703703704,\n \"acc_stderr\": 0.04276349494376599,\n \"acc_norm\": 0.5703703703703704,\n \"acc_norm_stderr\": 0.04276349494376599\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6118421052631579,\n \"acc_stderr\": 0.03965842097512744,\n \"acc_norm\": 0.6118421052631579,\n \"acc_norm_stderr\": 0.03965842097512744\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.6,\n \"acc_stderr\": 0.049236596391733084,\n \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.049236596391733084\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.690566037735849,\n \"acc_stderr\": 0.028450154794118637,\n \"acc_norm\": 0.690566037735849,\n \"acc_norm_stderr\": 0.028450154794118637\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6736111111111112,\n \"acc_stderr\": 0.03921067198982266,\n \"acc_norm\": 0.6736111111111112,\n \"acc_norm_stderr\": 0.03921067198982266\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.41,\n 
\"acc_stderr\": 0.049431107042371025,\n \"acc_norm\": 0.41,\n \"acc_norm_stderr\": 0.049431107042371025\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.53,\n \"acc_stderr\": 0.050161355804659205,\n \"acc_norm\": 0.53,\n \"acc_norm_stderr\": 0.050161355804659205\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5722543352601156,\n \"acc_stderr\": 0.03772446857518026,\n \"acc_norm\": 0.5722543352601156,\n \"acc_norm_stderr\": 0.03772446857518026\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.4117647058823529,\n \"acc_stderr\": 0.048971049527263666,\n \"acc_norm\": 0.4117647058823529,\n \"acc_norm_stderr\": 0.048971049527263666\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.65,\n \"acc_stderr\": 0.0479372485441102,\n \"acc_norm\": 0.65,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.49361702127659574,\n \"acc_stderr\": 0.03268335899936336,\n \"acc_norm\": 0.49361702127659574,\n \"acc_norm_stderr\": 0.03268335899936336\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.40350877192982454,\n \"acc_stderr\": 0.046151869625837026,\n \"acc_norm\": 0.40350877192982454,\n \"acc_norm_stderr\": 0.046151869625837026\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5448275862068965,\n \"acc_stderr\": 0.04149886942192117,\n \"acc_norm\": 0.5448275862068965,\n \"acc_norm_stderr\": 0.04149886942192117\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.3783068783068783,\n \"acc_stderr\": 0.02497695405315525,\n \"acc_norm\": 0.3783068783068783,\n \"acc_norm_stderr\": 0.02497695405315525\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4126984126984127,\n \"acc_stderr\": 0.04403438954768176,\n \"acc_norm\": 0.4126984126984127,\n \"acc_norm_stderr\": 0.04403438954768176\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.0479372485441102,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.6451612903225806,\n \"acc_stderr\": 0.027218889773308753,\n \"acc_norm\": 0.6451612903225806,\n \"acc_norm_stderr\": 0.027218889773308753\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.46798029556650245,\n \"acc_stderr\": 0.035107665979592154,\n \"acc_norm\": 0.46798029556650245,\n \"acc_norm_stderr\": 0.035107665979592154\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.63,\n \"acc_stderr\": 0.04852365870939098,\n \"acc_norm\": 0.63,\n \"acc_norm_stderr\": 0.04852365870939098\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7151515151515152,\n \"acc_stderr\": 0.03524390844511781,\n \"acc_norm\": 0.7151515151515152,\n \"acc_norm_stderr\": 0.03524390844511781\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7373737373737373,\n \"acc_stderr\": 0.031353050095330855,\n \"acc_norm\": 0.7373737373737373,\n \"acc_norm_stderr\": 0.031353050095330855\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8601036269430051,\n \"acc_stderr\": 0.025033870583015178,\n \"acc_norm\": 0.8601036269430051,\n \"acc_norm_stderr\": 0.025033870583015178\n },\n 
\"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.5717948717948718,\n \"acc_stderr\": 0.025088301454694834,\n \"acc_norm\": 0.5717948717948718,\n \"acc_norm_stderr\": 0.025088301454694834\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.027940457136228402,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.027940457136228402\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6638655462184874,\n \"acc_stderr\": 0.030684737115135363,\n \"acc_norm\": 0.6638655462184874,\n \"acc_norm_stderr\": 0.030684737115135363\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.33774834437086093,\n \"acc_stderr\": 0.03861557546255169,\n \"acc_norm\": 0.33774834437086093,\n \"acc_norm_stderr\": 0.03861557546255169\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.7926605504587156,\n \"acc_stderr\": 0.017381415563608678,\n \"acc_norm\": 0.7926605504587156,\n \"acc_norm_stderr\": 0.017381415563608678\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.4444444444444444,\n \"acc_stderr\": 0.03388857118502326,\n \"acc_norm\": 0.4444444444444444,\n \"acc_norm_stderr\": 0.03388857118502326\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7205882352941176,\n \"acc_stderr\": 0.03149328104507957,\n \"acc_norm\": 0.7205882352941176,\n \"acc_norm_stderr\": 0.03149328104507957\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7426160337552743,\n \"acc_stderr\": 0.02845882099146031,\n \"acc_norm\": 0.7426160337552743,\n \"acc_norm_stderr\": 0.02845882099146031\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6143497757847534,\n \"acc_stderr\": 0.03266842214289201,\n \"acc_norm\": 0.6143497757847534,\n \"acc_norm_stderr\": 0.03266842214289201\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.6870229007633588,\n \"acc_stderr\": 0.04066962905677698,\n \"acc_norm\": 0.6870229007633588,\n \"acc_norm_stderr\": 0.04066962905677698\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7851239669421488,\n \"acc_stderr\": 0.037494924487096966,\n \"acc_norm\": 0.7851239669421488,\n \"acc_norm_stderr\": 0.037494924487096966\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.6666666666666666,\n \"acc_stderr\": 0.04557239513497752,\n \"acc_norm\": 0.6666666666666666,\n \"acc_norm_stderr\": 0.04557239513497752\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7361963190184049,\n \"acc_stderr\": 0.03462419931615623,\n \"acc_norm\": 0.7361963190184049,\n \"acc_norm_stderr\": 0.03462419931615623\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.38392857142857145,\n \"acc_stderr\": 0.04616143075028547,\n \"acc_norm\": 0.38392857142857145,\n \"acc_norm_stderr\": 0.04616143075028547\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7572815533980582,\n \"acc_stderr\": 0.04245022486384495,\n \"acc_norm\": 0.7572815533980582,\n \"acc_norm_stderr\": 0.04245022486384495\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8418803418803419,\n \"acc_stderr\": 0.023902325549560396,\n \"acc_norm\": 0.8418803418803419,\n \"acc_norm_stderr\": 0.023902325549560396\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.68,\n \"acc_stderr\": 0.046882617226215034,\n \"acc_norm\": 0.68,\n \"acc_norm_stderr\": 0.046882617226215034\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7675606641123882,\n 
\"acc_stderr\": 0.015104550008905707,\n \"acc_norm\": 0.7675606641123882,\n \"acc_norm_stderr\": 0.015104550008905707\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.684971098265896,\n \"acc_stderr\": 0.025009313790069727,\n \"acc_norm\": 0.684971098265896,\n \"acc_norm_stderr\": 0.025009313790069727\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2849162011173184,\n \"acc_stderr\": 0.015096222302469809,\n \"acc_norm\": 0.2849162011173184,\n \"acc_norm_stderr\": 0.015096222302469809\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.6568627450980392,\n \"acc_stderr\": 0.02718449890994162,\n \"acc_norm\": 0.6568627450980392,\n \"acc_norm_stderr\": 0.02718449890994162\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6816720257234726,\n \"acc_stderr\": 0.026457225067811025,\n \"acc_norm\": 0.6816720257234726,\n \"acc_norm_stderr\": 0.026457225067811025\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7037037037037037,\n \"acc_stderr\": 0.025407197798890165,\n \"acc_norm\": 0.7037037037037037,\n \"acc_norm_stderr\": 0.025407197798890165\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4716312056737589,\n \"acc_stderr\": 0.029779450957303062,\n \"acc_norm\": 0.4716312056737589,\n \"acc_norm_stderr\": 0.029779450957303062\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.41916558018252936,\n \"acc_stderr\": 0.012602244505788238,\n \"acc_norm\": 0.41916558018252936,\n \"acc_norm_stderr\": 0.012602244505788238\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6176470588235294,\n \"acc_stderr\": 0.02952009569768776,\n \"acc_norm\": 0.6176470588235294,\n \"acc_norm_stderr\": 0.02952009569768776\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6274509803921569,\n \"acc_stderr\": 0.019559646809215937,\n \"acc_norm\": 0.6274509803921569,\n \"acc_norm_stderr\": 0.019559646809215937\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7363636363636363,\n \"acc_stderr\": 0.04220224692971987,\n \"acc_norm\": 0.7363636363636363,\n \"acc_norm_stderr\": 0.04220224692971987\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.6857142857142857,\n \"acc_stderr\": 0.029719329422417468,\n \"acc_norm\": 0.6857142857142857,\n \"acc_norm_stderr\": 0.029719329422417468\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7164179104477612,\n \"acc_stderr\": 0.03187187537919797,\n \"acc_norm\": 0.7164179104477612,\n \"acc_norm_stderr\": 0.03187187537919797\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.79,\n \"acc_stderr\": 0.040936018074033256,\n \"acc_norm\": 0.79,\n \"acc_norm_stderr\": 0.040936018074033256\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.4879518072289157,\n \"acc_stderr\": 0.0389136449583582,\n \"acc_norm\": 0.4879518072289157,\n \"acc_norm_stderr\": 0.0389136449583582\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8245614035087719,\n \"acc_stderr\": 0.029170885500727665,\n \"acc_norm\": 0.8245614035087719,\n \"acc_norm_stderr\": 0.029170885500727665\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5458996328029376,\n \"mc1_stderr\": 0.017429593091323522,\n \"mc2\": 0.6860006822932447,\n \"mc2_stderr\": 0.015499769302819941\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7671665351223362,\n \"acc_stderr\": 0.011878201073856542\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.33586050037907506,\n \"acc_stderr\": 0.013009224714267369\n }\n}\n```", 
"repo_url": "https://huggingface.co/SCE/Mistral-7B-summ-ia3-tuned", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|arc:challenge|25_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|gsm8k|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|hellaswag|10_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-29T08-13-56.817298.parquet", 
"**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-29T08-13-56.817298.parquet", 
"**/details_harness|hendrycksTest-college_computer_science|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-29T08-13-56.817298.parquet", 
"**/details_harness|hendrycksTest-philosophy|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-29T08-13-56.817298.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": 
["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-global_facts|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-29T08-13-56.817298.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_29T08_13_56.817298", "path": ["**/details_harness|winogrande|5_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-29T08-13-56.817298.parquet"]}]}, {"config_name": "results", "data_files": [{"split": 
"2024_01_29T08_13_56.817298", "path": ["results_2024-01-29T08-13-56.817298.parquet"]}, {"split": "latest", "path": ["results_2024-01-29T08-13-56.817298.parquet"]}]}]}
2024-01-29T08:16:43+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of SCE/Mistral-7B-summ-ia3-tuned Dataset automatically created during the evaluation run of model SCE/Mistral-7B-summ-ia3-tuned on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2024-01-29T08:13:56.817298 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Evaluation run of SCE/Mistral-7B-summ-ia3-tuned\n\n\n\nDataset automatically created during the evaluation run of model SCE/Mistral-7B-summ-ia3-tuned on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-01-29T08:13:56.817298(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of SCE/Mistral-7B-summ-ia3-tuned\n\n\n\nDataset automatically created during the evaluation run of model SCE/Mistral-7B-summ-ia3-tuned on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-01-29T08:13:56.817298(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
dc94e533bf344a06469464584db1201e43600e9f
# Webz.io News Dataset Repository [https://github.com/Webhose/free-news-datasets](https://github.com/Webhose/free-news-datasets) ## Introduction Welcome to the [Webz.io](https://webz.io) News Dataset Repository! This repository is created by [Webz.io](https://webz.io) and is dedicated to providing free datasets of publicly available news articles. We release new datasets weekly, each containing around 1,000 news articles focused on various themes, topics, or metadata characteristics like sentiment analysis, and top IPTC categories such as finance, sports, and politics. ## Dataset Overview - **Weekly Releases:** New dataset available every week. - **Thematic Focus:** Datasets based on specific themes, topics, or metadata. - **Rich Metadata:** Includes sentiment analysis, categories, publication dates. - **Diverse Sources:** Articles from a wide range of news websites. ## Usage The datasets are free for academic, research, and journalistic purposes: - **Data Analysis:** For statistical analyses, trend identification, and pattern recognition. - **Machine Learning:** Suitable for training NLP models, sentiment analysis, etc. - **Journalistic Research:** Helps journalists in data-driven storytelling. ## Accessing the Datasets - Browse the repository. - Find a dataset that suits your needs. - Download the dataset with its detailed description and metadata file. ## Contribution Contributions are welcome! If you have suggestions or want to contribute, please open an issue or a pull request. ## Support For questions or support, raise an issue in the repository. ---
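As a rough illustration of the "Data Analysis" use case above, the sketch below loads one downloaded dataset and tallies articles by sentiment and category. It is only a sketch under stated assumptions: the file name `news_dataset.json` and the field names `sentiment`, `category`, and `published` are hypothetical placeholders for illustration, not the repository's documented schema.

```python
# Minimal sketch of the "Data Analysis" use case described above.
# Assumptions (not taken from the repository docs): the downloaded dataset is a
# single JSON file containing a list of article records, and each record carries
# hypothetical "sentiment" and "category" metadata fields.
import json
from collections import Counter

with open("news_dataset.json", "r", encoding="utf-8") as f:
    articles = json.load(f)

# Count articles per sentiment label and per IPTC-style category.
sentiment_counts = Counter(a.get("sentiment", "unknown") for a in articles)
category_counts = Counter(a.get("category", "unknown") for a in articles)

print("Articles:", len(articles))
print("By sentiment:", dict(sentiment_counts))
print("Top categories:", category_counts.most_common(5))
```

From there, the same records could feed a text-classification pipeline or a trend analysis over publication dates, depending on which metadata fields the chosen weekly dataset actually ships with.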
rangeva/Webz.io_News_Dataset_Repository
[ "task_categories:text-classification", "size_categories:100M<n<1B", "region:us" ]
2024-01-29T08:16:22+00:00
{"size_categories": ["100M<n<1B"], "task_categories": ["text-classification"], "pretty_name": "Webz.io News Dataset Repository"}
2024-01-29T08:19:46+00:00
[]
[]
TAGS #task_categories-text-classification #size_categories-100M<n<1B #region-us
# URL News Dataset Repository URL ## Introduction Welcome to the URL News Dataset Repository! This repository is created by URL and is dedicated to providing free datasets of publicly available news articles. We release new datasets weekly, each containing around 1,000 news articles focused on various themes, topics, or metadata characteristics like sentiment analysis, and top IPTC categories such as finance, sports, and politics. ## Dataset Overview - Weekly Releases: New dataset available every week. - Thematic Focus: Datasets based on specific themes, topics, or metadata. - Rich Metadata: Includes sentiment analysis, categories, publication dates. - Diverse Sources: Articles from a wide range of news websites. ## Usage The datasets are free for academic, research, and journalistic purposes: - Data Analysis: For statistical analyses, trend identification, and pattern recognition. - Machine Learning: Suitable for training NLP models, sentiment analysis, etc. - Journalistic Research: Helps journalists in data-driven storytelling. ## Accessing the Datasets - Browse the repository. - Find a dataset that suits your needs. - Download the dataset with its detailed description and metadata file. ## Contribution Contributions are welcome! If you have suggestions or want to contribute, please open an issue or a pull request. ## Support For questions or support, raise an issue in the repository. ---
[ "# URL News Dataset Repository\n\nURL", "## Introduction\n\nWelcome to the URL News Dataset Repository! This repository is created by URL and is dedicated to providing free datasets of publicly available news articles. We release new datasets weekly, each containing around 1,000 news articles focused on various themes, topics, or metadata characteristics like sentiment analysis, and top IPTC categories such as finance, sports, and politics.", "## Dataset Overview\n\n- Weekly Releases: New dataset available every week.\n- Thematic Focus: Datasets based on specific themes, topics, or metadata.\n- Rich Metadata: Includes sentiment analysis, categories, publication dates.\n- Diverse Sources: Articles from a wide range of news websites.", "## Usage\n\nThe datasets are free for academic, research, and journalistic purposes:\n\n- Data Analysis: For statistical analyses, trend identification, and pattern recognition.\n- Machine Learning: Suitable for training NLP models, sentiment analysis, etc.\n- Journalistic Research: Helps journalists in data-driven storytelling.", "## Accessing the Datasets\n\n- Browse the repository.\n- Find a dataset that suits your needs.\n- Download the dataset with its detailed description and metadata file.", "## Contribution\n\nContributions are welcome! If you have suggestions or want to contribute, please open an issue or a pull request.", "## Support\n\nFor questions or support, raise an issue in the repository.\n\n\n---" ]
[ "TAGS\n#task_categories-text-classification #size_categories-100M<n<1B #region-us \n", "# URL News Dataset Repository\n\nURL", "## Introduction\n\nWelcome to the URL News Dataset Repository! This repository is created by URL and is dedicated to providing free datasets of publicly available news articles. We release new datasets weekly, each containing around 1,000 news articles focused on various themes, topics, or metadata characteristics like sentiment analysis, and top IPTC categories such as finance, sports, and politics.", "## Dataset Overview\n\n- Weekly Releases: New dataset available every week.\n- Thematic Focus: Datasets based on specific themes, topics, or metadata.\n- Rich Metadata: Includes sentiment analysis, categories, publication dates.\n- Diverse Sources: Articles from a wide range of news websites.", "## Usage\n\nThe datasets are free for academic, research, and journalistic purposes:\n\n- Data Analysis: For statistical analyses, trend identification, and pattern recognition.\n- Machine Learning: Suitable for training NLP models, sentiment analysis, etc.\n- Journalistic Research: Helps journalists in data-driven storytelling.", "## Accessing the Datasets\n\n- Browse the repository.\n- Find a dataset that suits your needs.\n- Download the dataset with its detailed description and metadata file.", "## Contribution\n\nContributions are welcome! If you have suggestions or want to contribute, please open an issue or a pull request.", "## Support\n\nFor questions or support, raise an issue in the repository.\n\n\n---" ]
8c2236bde08008b9290f4ea9b543a8028845c1f7
<p align = 'center'> <img width='150' src='./README.assets/stamp0546.png'> </p> <p align = 'center'> 🥕 </p> <p align = 'center'> 如果兔兔的仓库对你有帮助的话点个⭐喵~ </p> <p align = 'center'> If Tutu's repository is helpful to you, please give it a ⭐ meow~ </p> <p align = 'center'> もしうさぎのリポジトリが役に立った場合は、⭐をぽちっとしてくださいにゃん~ </p> <p align = 'center'> 🍉 </p> <p align = 'center'> 任何 ❓问题 / 💭思考 /💡想法 都欢迎提出!</p> <p align = 'center'> Any ❓question / 💭thought /💡idea is welcome! </p> <p align = 'center'> どんな ❓質問 / 💭考え /💡アイデア でも歓迎です! </p> --- # Introduction * [Kusanagi Nene (草薙寧々)](https://zh.moegirl.org.cn/%E8%8D%89%E8%96%99%E5%AE%81%E5%AE%81) labeled dry-vocal dataset This dataset only covers part of the in-game audio, not all of Nene's dry-vocal lines; nene_org.txt is the label file. For labeled dry-vocal datasets of all PJSK characters, you can join QQ group 691795641; the netdisk link is in the group announcement. # References * Voice source attribution: Kusanagi Nene (草薙寧々, CV: Machico) - [「プロジェクトセカイ カラフルステージ! feat. 初音ミク」](https://pjsekai.sega.jp/) # TODO * (Long-term) Complete the remaining Nene voice lines and standardize the dataset format.
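For orientation, a minimal inspection sketch is shown below. It assumes the clips are WAV files stored in the downloaded dataset directory alongside nene_org.txt; the directory layout and the label-file format are assumptions, since this card does not specify them, so the label lines are simply printed as-is.

```python
# Minimal sketch for inspecting the dataset (assumed layout: .wav clips plus
# nene_org.txt in one local download directory; format details are not
# documented in this card).
from pathlib import Path

root = Path("Voice-KusanagiNene")  # hypothetical local download path

wav_files = sorted(root.rglob("*.wav"))
print(f"Found {len(wav_files)} audio clips")

label_file = root / "nene_org.txt"
if label_file.exists():
    with label_file.open(encoding="utf-8") as f:
        for i, line in enumerate(f):
            if i >= 5:  # preview the first few label lines only
                break
            print(line.rstrip())
```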
MomoyamaSawa/Voice-KusanagiNene
[ "task_categories:other", "task_categories:text-to-speech", "task_categories:audio-to-audio", "size_categories:1K<n<10K", "language:ja", "license:gpl-3.0", "pjsk", "草薙寧々", "草薙宁宁", "プロセカ", "prsk", "宁宁", "region:us" ]
2024-01-29T08:17:20+00:00
{"language": ["ja"], "license": "gpl-3.0", "size_categories": ["1K<n<10K"], "task_categories": ["other", "text-to-speech", "audio-to-audio"], "pretty_name": "\u634f\u634f", "tags": ["pjsk", "\u8349\u8599\u5be7\u3005", "\u8349\u8599\u5b81\u5b81", "\u30d7\u30ed\u30bb\u30ab", "prsk", "\u5b81\u5b81"]}
2024-01-29T10:51:51+00:00
[]
[ "ja" ]
TAGS #task_categories-other #task_categories-text-to-speech #task_categories-audio-to-audio #size_categories-1K<n<10K #language-Japanese #license-gpl-3.0 #pjsk #草薙寧々 #草薙宁宁 #プロセカ #prsk #宁宁 #region-us
<p align = 'center'> <img width='150' src='./URL </p> <p align = 'center'> </p> <p align = 'center'> 如果兔兔的仓库对你有帮助的话点个⭐喵~ </p> <p align = 'center'> If Tutu's repository is helpful to you, please give it a ⭐ meow~ </p> <p align = 'center'> もしうさぎのリポジトリが役に立った場合は、⭐をぽちっとしてくださいにゃん~ </p> <p align = 'center'> </p> <p align = 'center'> 任何 问题 / 思考 /想法 都欢迎提出!</p> <p align = 'center'> Any question / thought /idea is welcome! </p> <p align = 'center'> どんな 質問 / 考え /アイデア でも歓迎です! </p> --- # Introduction * Kusanagi Nene (草薙寧々) labeled dry-vocal dataset This dataset only covers part of the in-game audio, not all of Nene's dry-vocal lines; nene_org.txt is the label file For labeled dry-vocal datasets of all PJSK characters, you can join QQ group 691795641; the netdisk link is in the group announcement # References * Voice source attribution: Kusanagi Nene (草薙寧々, CV: Machico) - 「プロジェクトセカイ カラフルステージ! feat. 初音ミク」 # TODO * (Long-term) Complete the remaining Nene voice lines and standardize the dataset format.
[ "# 简介\n\n* 草薙寧々 干声带标签数据集\n\n本数据集只收集了游戏内的一部分,并不是全部的宁宁干声语音,其中 nene_org.txt 是标签文件\n\npjsk 全部角色干声带标签数据集的话可以加QQ群:691795641,群公告里有网盘地址", "# 参考\n\n* 声源归属:草薙寧々(CV:Machico)-「プロジェクトセカイ カラフルステージ! feat. 初音ミク」", "# TODO\n\n* (长期)补全宁宁语音,规范数据集格式。" ]
[ "TAGS\n#task_categories-other #task_categories-text-to-speech #task_categories-audio-to-audio #size_categories-1K<n<10K #language-Japanese #license-gpl-3.0 #pjsk #草薙寧々 #草薙宁宁 #プロセカ #prsk #宁宁 #region-us \n", "# 简介\n\n* 草薙寧々 干声带标签数据集\n\n本数据集只收集了游戏内的一部分,并不是全部的宁宁干声语音,其中 nene_org.txt 是标签文件\n\npjsk 全部角色干声带标签数据集的话可以加QQ群:691795641,群公告里有网盘地址", "# 参考\n\n* 声源归属:草薙寧々(CV:Machico)-「プロジェクトセカイ カラフルステージ! feat. 初音ミク」", "# TODO\n\n* (长期)补全宁宁语音,规范数据集格式。" ]
a67c52bf369545e98999b52b07116741e87a462c
# Dataset Card for Evaluation run of SCE/Mistral-7B-summ-ia3-pruned10 <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [SCE/Mistral-7B-summ-ia3-pruned10](https://huggingface.co/SCE/Mistral-7B-summ-ia3-pruned10) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configuration, each one coresponding to one of the evaluated task. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results. An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_SCE__Mistral-7B-summ-ia3-pruned10", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2024-01-29T08:19:37.916430](https://huggingface.co/datasets/open-llm-leaderboard/details_SCE__Mistral-7B-summ-ia3-pruned10/blob/main/results_2024-01-29T08-19-37.916430.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.5971553727831214, "acc_stderr": 0.03333970029476316, "acc_norm": 0.6015807139566308, "acc_norm_stderr": 0.03401821037155317, "mc1": 0.5324357405140759, "mc1_stderr": 0.017466632149577613, "mc2": 0.6809872081720858, "mc2_stderr": 0.015247456771921587 }, "harness|arc:challenge|25": { "acc": 0.5742320819112628, "acc_stderr": 0.01444946427886881, "acc_norm": 0.6305460750853242, "acc_norm_stderr": 0.014104578366491888 }, "harness|hellaswag|10": { "acc": 0.670583549093806, "acc_stderr": 0.004690407826933912, "acc_norm": 0.8488348934475204, "acc_norm_stderr": 0.0035747765941085063 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.32, "acc_stderr": 0.04688261722621503, "acc_norm": 0.32, "acc_norm_stderr": 0.04688261722621503 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.5851851851851851, "acc_stderr": 0.04256193767901408, "acc_norm": 0.5851851851851851, "acc_norm_stderr": 0.04256193767901408 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.625, "acc_stderr": 0.039397364351956274, "acc_norm": 0.625, "acc_norm_stderr": 0.039397364351956274 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.62, "acc_stderr": 0.048783173121456316, "acc_norm": 0.62, "acc_norm_stderr": 0.048783173121456316 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.6528301886792452, "acc_stderr": 0.029300101705549652, "acc_norm": 0.6528301886792452, "acc_norm_stderr": 0.029300101705549652 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.6666666666666666, "acc_stderr": 0.03942082639927213, "acc_norm": 0.6666666666666666, "acc_norm_stderr": 0.03942082639927213 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.4, "acc_stderr": 0.04923659639173309, "acc_norm": 0.4, "acc_norm_stderr": 0.04923659639173309 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.49, "acc_stderr": 0.05024183937956913, "acc_norm": 0.49, "acc_norm_stderr": 0.05024183937956913 }, 
"harness|hendrycksTest-college_mathematics|5": { "acc": 0.35, "acc_stderr": 0.0479372485441102, "acc_norm": 0.35, "acc_norm_stderr": 0.0479372485441102 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.5780346820809249, "acc_stderr": 0.0376574669386515, "acc_norm": 0.5780346820809249, "acc_norm_stderr": 0.0376574669386515 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.43137254901960786, "acc_stderr": 0.04928099597287534, "acc_norm": 0.43137254901960786, "acc_norm_stderr": 0.04928099597287534 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.68, "acc_stderr": 0.04688261722621504, "acc_norm": 0.68, "acc_norm_stderr": 0.04688261722621504 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.5063829787234042, "acc_stderr": 0.03268335899936337, "acc_norm": 0.5063829787234042, "acc_norm_stderr": 0.03268335899936337 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.4298245614035088, "acc_stderr": 0.04657047260594964, "acc_norm": 0.4298245614035088, "acc_norm_stderr": 0.04657047260594964 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.5793103448275863, "acc_stderr": 0.0411391498118926, "acc_norm": 0.5793103448275863, "acc_norm_stderr": 0.0411391498118926 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.3783068783068783, "acc_stderr": 0.024976954053155254, "acc_norm": 0.3783068783068783, "acc_norm_stderr": 0.024976954053155254 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.3968253968253968, "acc_stderr": 0.0437588849272706, "acc_norm": 0.3968253968253968, "acc_norm_stderr": 0.0437588849272706 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.36, "acc_stderr": 0.04824181513244218, "acc_norm": 0.36, "acc_norm_stderr": 0.04824181513244218 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.5903225806451613, "acc_stderr": 0.027976054915347354, "acc_norm": 0.5903225806451613, "acc_norm_stderr": 0.027976054915347354 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.49261083743842365, "acc_stderr": 0.03517603540361008, "acc_norm": 0.49261083743842365, "acc_norm_stderr": 0.03517603540361008 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.61, "acc_stderr": 0.04902071300001974, "acc_norm": 0.61, "acc_norm_stderr": 0.04902071300001974 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.7454545454545455, "acc_stderr": 0.03401506715249039, "acc_norm": 0.7454545454545455, "acc_norm_stderr": 0.03401506715249039 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.7626262626262627, "acc_stderr": 0.030313710538198896, "acc_norm": 0.7626262626262627, "acc_norm_stderr": 0.030313710538198896 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.8497409326424871, "acc_stderr": 0.025787723180723872, "acc_norm": 0.8497409326424871, "acc_norm_stderr": 0.025787723180723872 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.558974358974359, "acc_stderr": 0.025174048384000745, "acc_norm": 0.558974358974359, "acc_norm_stderr": 0.025174048384000745 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.3148148148148148, "acc_stderr": 0.02831753349606649, "acc_norm": 0.3148148148148148, "acc_norm_stderr": 0.02831753349606649 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.6596638655462185, "acc_stderr": 0.030778057422931673, "acc_norm": 0.6596638655462185, "acc_norm_stderr": 0.030778057422931673 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.3708609271523179, "acc_stderr": 0.03943966699183629, 
"acc_norm": 0.3708609271523179, "acc_norm_stderr": 0.03943966699183629 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.7853211009174312, "acc_stderr": 0.01760430414925648, "acc_norm": 0.7853211009174312, "acc_norm_stderr": 0.01760430414925648 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.4537037037037037, "acc_stderr": 0.03395322726375797, "acc_norm": 0.4537037037037037, "acc_norm_stderr": 0.03395322726375797 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.7450980392156863, "acc_stderr": 0.030587591351604246, "acc_norm": 0.7450980392156863, "acc_norm_stderr": 0.030587591351604246 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.7510548523206751, "acc_stderr": 0.028146970599422644, "acc_norm": 0.7510548523206751, "acc_norm_stderr": 0.028146970599422644 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.600896860986547, "acc_stderr": 0.03286745312567961, "acc_norm": 0.600896860986547, "acc_norm_stderr": 0.03286745312567961 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.7099236641221374, "acc_stderr": 0.039800662464677665, "acc_norm": 0.7099236641221374, "acc_norm_stderr": 0.039800662464677665 }, "harness|hendrycksTest-international_law|5": { "acc": 0.8264462809917356, "acc_stderr": 0.0345727283691767, "acc_norm": 0.8264462809917356, "acc_norm_stderr": 0.0345727283691767 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.7037037037037037, "acc_stderr": 0.04414343666854933, "acc_norm": 0.7037037037037037, "acc_norm_stderr": 0.04414343666854933 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7300613496932515, "acc_stderr": 0.034878251684978906, "acc_norm": 0.7300613496932515, "acc_norm_stderr": 0.034878251684978906 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.38392857142857145, "acc_stderr": 0.04616143075028547, "acc_norm": 0.38392857142857145, "acc_norm_stderr": 0.04616143075028547 }, "harness|hendrycksTest-management|5": { "acc": 0.7281553398058253, "acc_stderr": 0.044052680241409216, "acc_norm": 0.7281553398058253, "acc_norm_stderr": 0.044052680241409216 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8675213675213675, "acc_stderr": 0.022209309073165612, "acc_norm": 0.8675213675213675, "acc_norm_stderr": 0.022209309073165612 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.66, "acc_stderr": 0.04760952285695237, "acc_norm": 0.66, "acc_norm_stderr": 0.04760952285695237 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.7752234993614304, "acc_stderr": 0.014927447101937146, "acc_norm": 0.7752234993614304, "acc_norm_stderr": 0.014927447101937146 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.684971098265896, "acc_stderr": 0.025009313790069727, "acc_norm": 0.684971098265896, "acc_norm_stderr": 0.025009313790069727 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.2860335195530726, "acc_stderr": 0.015113972129062138, "acc_norm": 0.2860335195530726, "acc_norm_stderr": 0.015113972129062138 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.6862745098039216, "acc_stderr": 0.02656892101545714, "acc_norm": 0.6862745098039216, "acc_norm_stderr": 0.02656892101545714 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.6752411575562701, "acc_stderr": 0.026596782287697043, "acc_norm": 0.6752411575562701, "acc_norm_stderr": 0.026596782287697043 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.691358024691358, "acc_stderr": 0.025702640260603742, "acc_norm": 0.691358024691358, "acc_norm_stderr": 0.025702640260603742 }, "harness|hendrycksTest-professional_accounting|5": { 
"acc": 0.46808510638297873, "acc_stderr": 0.029766675075873866, "acc_norm": 0.46808510638297873, "acc_norm_stderr": 0.029766675075873866 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.43089960886571055, "acc_stderr": 0.012647695889547231, "acc_norm": 0.43089960886571055, "acc_norm_stderr": 0.012647695889547231 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.6286764705882353, "acc_stderr": 0.02934980313976587, "acc_norm": 0.6286764705882353, "acc_norm_stderr": 0.02934980313976587 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.630718954248366, "acc_stderr": 0.01952431674486635, "acc_norm": 0.630718954248366, "acc_norm_stderr": 0.01952431674486635 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.7090909090909091, "acc_stderr": 0.04350271442923243, "acc_norm": 0.7090909090909091, "acc_norm_stderr": 0.04350271442923243 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.6938775510204082, "acc_stderr": 0.02950489645459596, "acc_norm": 0.6938775510204082, "acc_norm_stderr": 0.02950489645459596 }, "harness|hendrycksTest-sociology|5": { "acc": 0.6019900497512438, "acc_stderr": 0.034611994290400135, "acc_norm": 0.6019900497512438, "acc_norm_stderr": 0.034611994290400135 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.8, "acc_stderr": 0.04020151261036846, "acc_norm": 0.8, "acc_norm_stderr": 0.04020151261036846 }, "harness|hendrycksTest-virology|5": { "acc": 0.4879518072289157, "acc_stderr": 0.03891364495835821, "acc_norm": 0.4879518072289157, "acc_norm_stderr": 0.03891364495835821 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.8128654970760234, "acc_stderr": 0.02991312723236804, "acc_norm": 0.8128654970760234, "acc_norm_stderr": 0.02991312723236804 }, "harness|truthfulqa:mc|0": { "mc1": 0.5324357405140759, "mc1_stderr": 0.017466632149577613, "mc2": 0.6809872081720858, "mc2_stderr": 0.015247456771921587 }, "harness|winogrande|5": { "acc": 0.7750591949486977, "acc_stderr": 0.011735043564126734 }, "harness|gsm8k|5": { "acc": 0.39272175890826383, "acc_stderr": 0.013451745349586573 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). 
--> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
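The loading snippet near the top of this card targets a single task configuration; as a complementary sketch, the call below pulls the aggregated "results" configuration at its "latest" split, following the configuration and split names this card describes.

```python
from datasets import load_dataset

# Aggregated metrics for the run, using the "results" configuration and the
# "latest" split described in this card.
results = load_dataset(
    "open-llm-leaderboard/details_SCE__Mistral-7B-summ-ia3-pruned10",
    "results",
    split="latest",
)
print(results)
```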
open-llm-leaderboard/details_SCE__Mistral-7B-summ-ia3-pruned10
[ "region:us" ]
2024-01-29T08:21:58+00:00
{"pretty_name": "Evaluation run of SCE/Mistral-7B-summ-ia3-pruned10", "dataset_summary": "Dataset automatically created during the evaluation run of model [SCE/Mistral-7B-summ-ia3-pruned10](https://huggingface.co/SCE/Mistral-7B-summ-ia3-pruned10) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_SCE__Mistral-7B-summ-ia3-pruned10\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-29T08:19:37.916430](https://huggingface.co/datasets/open-llm-leaderboard/details_SCE__Mistral-7B-summ-ia3-pruned10/blob/main/results_2024-01-29T08-19-37.916430.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5971553727831214,\n \"acc_stderr\": 0.03333970029476316,\n \"acc_norm\": 0.6015807139566308,\n \"acc_norm_stderr\": 0.03401821037155317,\n \"mc1\": 0.5324357405140759,\n \"mc1_stderr\": 0.017466632149577613,\n \"mc2\": 0.6809872081720858,\n \"mc2_stderr\": 0.015247456771921587\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5742320819112628,\n \"acc_stderr\": 0.01444946427886881,\n \"acc_norm\": 0.6305460750853242,\n \"acc_norm_stderr\": 0.014104578366491888\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.670583549093806,\n \"acc_stderr\": 0.004690407826933912,\n \"acc_norm\": 0.8488348934475204,\n \"acc_norm_stderr\": 0.0035747765941085063\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.04688261722621503,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.04688261722621503\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5851851851851851,\n \"acc_stderr\": 0.04256193767901408,\n \"acc_norm\": 0.5851851851851851,\n \"acc_norm_stderr\": 0.04256193767901408\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.625,\n \"acc_stderr\": 0.039397364351956274,\n \"acc_norm\": 0.625,\n \"acc_norm_stderr\": 0.039397364351956274\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.62,\n \"acc_stderr\": 0.048783173121456316,\n \"acc_norm\": 0.62,\n \"acc_norm_stderr\": 0.048783173121456316\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6528301886792452,\n \"acc_stderr\": 0.029300101705549652,\n \"acc_norm\": 0.6528301886792452,\n \"acc_norm_stderr\": 0.029300101705549652\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6666666666666666,\n \"acc_stderr\": 0.03942082639927213,\n \"acc_norm\": 0.6666666666666666,\n \"acc_norm_stderr\": 0.03942082639927213\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.4,\n \"acc_stderr\": 
0.04923659639173309,\n \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.04923659639173309\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.49,\n \"acc_stderr\": 0.05024183937956913,\n \"acc_norm\": 0.49,\n \"acc_norm_stderr\": 0.05024183937956913\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.0479372485441102,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5780346820809249,\n \"acc_stderr\": 0.0376574669386515,\n \"acc_norm\": 0.5780346820809249,\n \"acc_norm_stderr\": 0.0376574669386515\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.43137254901960786,\n \"acc_stderr\": 0.04928099597287534,\n \"acc_norm\": 0.43137254901960786,\n \"acc_norm_stderr\": 0.04928099597287534\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.68,\n \"acc_stderr\": 0.04688261722621504,\n \"acc_norm\": 0.68,\n \"acc_norm_stderr\": 0.04688261722621504\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5063829787234042,\n \"acc_stderr\": 0.03268335899936337,\n \"acc_norm\": 0.5063829787234042,\n \"acc_norm_stderr\": 0.03268335899936337\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4298245614035088,\n \"acc_stderr\": 0.04657047260594964,\n \"acc_norm\": 0.4298245614035088,\n \"acc_norm_stderr\": 0.04657047260594964\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5793103448275863,\n \"acc_stderr\": 0.0411391498118926,\n \"acc_norm\": 0.5793103448275863,\n \"acc_norm_stderr\": 0.0411391498118926\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.3783068783068783,\n \"acc_stderr\": 0.024976954053155254,\n \"acc_norm\": 0.3783068783068783,\n \"acc_norm_stderr\": 0.024976954053155254\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.3968253968253968,\n \"acc_stderr\": 0.0437588849272706,\n \"acc_norm\": 0.3968253968253968,\n \"acc_norm_stderr\": 0.0437588849272706\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.36,\n \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.5903225806451613,\n \"acc_stderr\": 0.027976054915347354,\n \"acc_norm\": 0.5903225806451613,\n \"acc_norm_stderr\": 0.027976054915347354\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.49261083743842365,\n \"acc_stderr\": 0.03517603540361008,\n \"acc_norm\": 0.49261083743842365,\n \"acc_norm_stderr\": 0.03517603540361008\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.61,\n \"acc_stderr\": 0.04902071300001974,\n \"acc_norm\": 0.61,\n \"acc_norm_stderr\": 0.04902071300001974\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7454545454545455,\n \"acc_stderr\": 0.03401506715249039,\n \"acc_norm\": 0.7454545454545455,\n \"acc_norm_stderr\": 0.03401506715249039\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7626262626262627,\n \"acc_stderr\": 0.030313710538198896,\n \"acc_norm\": 0.7626262626262627,\n \"acc_norm_stderr\": 0.030313710538198896\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8497409326424871,\n \"acc_stderr\": 0.025787723180723872,\n \"acc_norm\": 0.8497409326424871,\n \"acc_norm_stderr\": 0.025787723180723872\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n 
\"acc\": 0.558974358974359,\n \"acc_stderr\": 0.025174048384000745,\n \"acc_norm\": 0.558974358974359,\n \"acc_norm_stderr\": 0.025174048384000745\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3148148148148148,\n \"acc_stderr\": 0.02831753349606649,\n \"acc_norm\": 0.3148148148148148,\n \"acc_norm_stderr\": 0.02831753349606649\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6596638655462185,\n \"acc_stderr\": 0.030778057422931673,\n \"acc_norm\": 0.6596638655462185,\n \"acc_norm_stderr\": 0.030778057422931673\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3708609271523179,\n \"acc_stderr\": 0.03943966699183629,\n \"acc_norm\": 0.3708609271523179,\n \"acc_norm_stderr\": 0.03943966699183629\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.7853211009174312,\n \"acc_stderr\": 0.01760430414925648,\n \"acc_norm\": 0.7853211009174312,\n \"acc_norm_stderr\": 0.01760430414925648\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.4537037037037037,\n \"acc_stderr\": 0.03395322726375797,\n \"acc_norm\": 0.4537037037037037,\n \"acc_norm_stderr\": 0.03395322726375797\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7450980392156863,\n \"acc_stderr\": 0.030587591351604246,\n \"acc_norm\": 0.7450980392156863,\n \"acc_norm_stderr\": 0.030587591351604246\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7510548523206751,\n \"acc_stderr\": 0.028146970599422644,\n \"acc_norm\": 0.7510548523206751,\n \"acc_norm_stderr\": 0.028146970599422644\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.600896860986547,\n \"acc_stderr\": 0.03286745312567961,\n \"acc_norm\": 0.600896860986547,\n \"acc_norm_stderr\": 0.03286745312567961\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7099236641221374,\n \"acc_stderr\": 0.039800662464677665,\n \"acc_norm\": 0.7099236641221374,\n \"acc_norm_stderr\": 0.039800662464677665\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8264462809917356,\n \"acc_stderr\": 0.0345727283691767,\n \"acc_norm\": 0.8264462809917356,\n \"acc_norm_stderr\": 0.0345727283691767\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7037037037037037,\n \"acc_stderr\": 0.04414343666854933,\n \"acc_norm\": 0.7037037037037037,\n \"acc_norm_stderr\": 0.04414343666854933\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7300613496932515,\n \"acc_stderr\": 0.034878251684978906,\n \"acc_norm\": 0.7300613496932515,\n \"acc_norm_stderr\": 0.034878251684978906\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.38392857142857145,\n \"acc_stderr\": 0.04616143075028547,\n \"acc_norm\": 0.38392857142857145,\n \"acc_norm_stderr\": 0.04616143075028547\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7281553398058253,\n \"acc_stderr\": 0.044052680241409216,\n \"acc_norm\": 0.7281553398058253,\n \"acc_norm_stderr\": 0.044052680241409216\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8675213675213675,\n \"acc_stderr\": 0.022209309073165612,\n \"acc_norm\": 0.8675213675213675,\n \"acc_norm_stderr\": 0.022209309073165612\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.66,\n \"acc_stderr\": 0.04760952285695237,\n \"acc_norm\": 0.66,\n \"acc_norm_stderr\": 0.04760952285695237\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7752234993614304,\n \"acc_stderr\": 0.014927447101937146,\n 
\"acc_norm\": 0.7752234993614304,\n \"acc_norm_stderr\": 0.014927447101937146\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.684971098265896,\n \"acc_stderr\": 0.025009313790069727,\n \"acc_norm\": 0.684971098265896,\n \"acc_norm_stderr\": 0.025009313790069727\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2860335195530726,\n \"acc_stderr\": 0.015113972129062138,\n \"acc_norm\": 0.2860335195530726,\n \"acc_norm_stderr\": 0.015113972129062138\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.6862745098039216,\n \"acc_stderr\": 0.02656892101545714,\n \"acc_norm\": 0.6862745098039216,\n \"acc_norm_stderr\": 0.02656892101545714\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6752411575562701,\n \"acc_stderr\": 0.026596782287697043,\n \"acc_norm\": 0.6752411575562701,\n \"acc_norm_stderr\": 0.026596782287697043\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.691358024691358,\n \"acc_stderr\": 0.025702640260603742,\n \"acc_norm\": 0.691358024691358,\n \"acc_norm_stderr\": 0.025702640260603742\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.46808510638297873,\n \"acc_stderr\": 0.029766675075873866,\n \"acc_norm\": 0.46808510638297873,\n \"acc_norm_stderr\": 0.029766675075873866\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.43089960886571055,\n \"acc_stderr\": 0.012647695889547231,\n \"acc_norm\": 0.43089960886571055,\n \"acc_norm_stderr\": 0.012647695889547231\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6286764705882353,\n \"acc_stderr\": 0.02934980313976587,\n \"acc_norm\": 0.6286764705882353,\n \"acc_norm_stderr\": 0.02934980313976587\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.630718954248366,\n \"acc_stderr\": 0.01952431674486635,\n \"acc_norm\": 0.630718954248366,\n \"acc_norm_stderr\": 0.01952431674486635\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7090909090909091,\n \"acc_stderr\": 0.04350271442923243,\n \"acc_norm\": 0.7090909090909091,\n \"acc_norm_stderr\": 0.04350271442923243\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.6938775510204082,\n \"acc_stderr\": 0.02950489645459596,\n \"acc_norm\": 0.6938775510204082,\n \"acc_norm_stderr\": 0.02950489645459596\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.6019900497512438,\n \"acc_stderr\": 0.034611994290400135,\n \"acc_norm\": 0.6019900497512438,\n \"acc_norm_stderr\": 0.034611994290400135\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.8,\n \"acc_stderr\": 0.04020151261036846,\n \"acc_norm\": 0.8,\n \"acc_norm_stderr\": 0.04020151261036846\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.4879518072289157,\n \"acc_stderr\": 0.03891364495835821,\n \"acc_norm\": 0.4879518072289157,\n \"acc_norm_stderr\": 0.03891364495835821\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8128654970760234,\n \"acc_stderr\": 0.02991312723236804,\n \"acc_norm\": 0.8128654970760234,\n \"acc_norm_stderr\": 0.02991312723236804\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5324357405140759,\n \"mc1_stderr\": 0.017466632149577613,\n \"mc2\": 0.6809872081720858,\n \"mc2_stderr\": 0.015247456771921587\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7750591949486977,\n \"acc_stderr\": 0.011735043564126734\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.39272175890826383,\n \"acc_stderr\": 0.013451745349586573\n }\n}\n```", "repo_url": 
"https://huggingface.co/SCE/Mistral-7B-summ-ia3-pruned10", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|arc:challenge|25_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|gsm8k|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|hellaswag|10_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-29T08-19-37.916430.parquet", 
"**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-29T08-19-37.916430.parquet", 
"**/details_harness|hendrycksTest-college_computer_science|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-29T08-19-37.916430.parquet", 
"**/details_harness|hendrycksTest-philosophy|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-29T08-19-37.916430.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": 
["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-global_facts|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-29T08-19-37.916430.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_29T08_19_37.916430", "path": ["**/details_harness|winogrande|5_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-29T08-19-37.916430.parquet"]}]}, {"config_name": "results", "data_files": [{"split": 
"2024_01_29T08_19_37.916430", "path": ["results_2024-01-29T08-19-37.916430.parquet"]}, {"split": "latest", "path": ["results_2024-01-29T08-19-37.916430.parquet"]}]}]}
2024-01-29T08:22:20+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of SCE/Mistral-7B-summ-ia3-pruned10 Dataset automatically created during the evaluation run of model SCE/Mistral-7B-summ-ia3-pruned10 on the Open LLM Leaderboard. The dataset is composed of 63 configuration, each one coresponding to one of the evaluated task. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results. An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2024-01-29T08:19:37.916430(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Evaluation run of SCE/Mistral-7B-summ-ia3-pruned10\n\n\n\nDataset automatically created during the evaluation run of model SCE/Mistral-7B-summ-ia3-pruned10 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-01-29T08:19:37.916430(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of SCE/Mistral-7B-summ-ia3-pruned10\n\n\n\nDataset automatically created during the evaluation run of model SCE/Mistral-7B-summ-ia3-pruned10 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-01-29T08:19:37.916430(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
73168a13d8d88b06e0622843b3e61c40af12aed7
# Dataset Card for Evaluation run of SCE/Mistral-7B-summ-ia3-pruned20 <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [SCE/Mistral-7B-summ-ia3-pruned20](https://huggingface.co/SCE/Mistral-7B-summ-ia3-pruned20) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following (further loading examples are given at the end of this card): ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_SCE__Mistral-7B-summ-ia3-pruned20", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2024-01-29T08:25:43.679587](https://huggingface.co/datasets/open-llm-leaderboard/details_SCE__Mistral-7B-summ-ia3-pruned20/blob/main/results_2024-01-29T08-25-43.679587.json)(note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.6008987872395051, "acc_stderr": 0.03325115836537348, "acc_norm": 0.6055605429668626, "acc_norm_stderr": 0.03392591539172697, "mc1": 0.5385556915544676, "mc1_stderr": 0.017451384104637455, "mc2": 0.678373098818418, "mc2_stderr": 0.01534656228604094 }, "harness|arc:challenge|25": { "acc": 0.575938566552901, "acc_stderr": 0.014441889627464392, "acc_norm": 0.628839590443686, "acc_norm_stderr": 0.014117971901142824 }, "harness|hellaswag|10": { "acc": 0.6696873132842064, "acc_stderr": 0.00469364435720205, "acc_norm": 0.8477394941246763, "acc_norm_stderr": 0.003585389636472375 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.33, "acc_stderr": 0.04725815626252605, "acc_norm": 0.33, "acc_norm_stderr": 0.04725815626252605 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.6, "acc_stderr": 0.04232073695151589, "acc_norm": 0.6, "acc_norm_stderr": 0.04232073695151589 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.6447368421052632, "acc_stderr": 0.038947344870133176, "acc_norm": 0.6447368421052632, "acc_norm_stderr": 0.038947344870133176 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.61, "acc_stderr": 0.04902071300001975, "acc_norm": 0.61, "acc_norm_stderr": 0.04902071300001975 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.6679245283018868, "acc_stderr": 0.02898545565233439, "acc_norm": 0.6679245283018868, "acc_norm_stderr": 0.02898545565233439 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.6805555555555556, "acc_stderr": 0.038990736873573344, "acc_norm": 0.6805555555555556, "acc_norm_stderr": 0.038990736873573344 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.41, "acc_stderr": 0.049431107042371025, "acc_norm": 0.41, "acc_norm_stderr": 0.049431107042371025 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.56, "acc_stderr": 0.049888765156985884, "acc_norm": 0.56, "acc_norm_stderr": 0.049888765156985884 },
"harness|hendrycksTest-college_mathematics|5": { "acc": 0.34, "acc_stderr": 0.04760952285695236, "acc_norm": 0.34, "acc_norm_stderr": 0.04760952285695236 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.5664739884393064, "acc_stderr": 0.03778621079092056, "acc_norm": 0.5664739884393064, "acc_norm_stderr": 0.03778621079092056 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.43137254901960786, "acc_stderr": 0.04928099597287534, "acc_norm": 0.43137254901960786, "acc_norm_stderr": 0.04928099597287534 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.68, "acc_stderr": 0.04688261722621504, "acc_norm": 0.68, "acc_norm_stderr": 0.04688261722621504 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.49361702127659574, "acc_stderr": 0.03268335899936336, "acc_norm": 0.49361702127659574, "acc_norm_stderr": 0.03268335899936336 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.41228070175438597, "acc_stderr": 0.046306532033665956, "acc_norm": 0.41228070175438597, "acc_norm_stderr": 0.046306532033665956 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.6137931034482759, "acc_stderr": 0.04057324734419035, "acc_norm": 0.6137931034482759, "acc_norm_stderr": 0.04057324734419035 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.3835978835978836, "acc_stderr": 0.025043757318520193, "acc_norm": 0.3835978835978836, "acc_norm_stderr": 0.025043757318520193 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.38095238095238093, "acc_stderr": 0.04343525428949098, "acc_norm": 0.38095238095238093, "acc_norm_stderr": 0.04343525428949098 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.38, "acc_stderr": 0.048783173121456316, "acc_norm": 0.38, "acc_norm_stderr": 0.048783173121456316 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.6290322580645161, "acc_stderr": 0.027480541887953593, "acc_norm": 0.6290322580645161, "acc_norm_stderr": 0.027480541887953593 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.46798029556650245, "acc_stderr": 0.03510766597959217, "acc_norm": 0.46798029556650245, "acc_norm_stderr": 0.03510766597959217 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.62, "acc_stderr": 0.04878317312145632, "acc_norm": 0.62, "acc_norm_stderr": 0.04878317312145632 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.7454545454545455, "acc_stderr": 0.03401506715249039, "acc_norm": 0.7454545454545455, "acc_norm_stderr": 0.03401506715249039 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.7575757575757576, "acc_stderr": 0.030532892233932022, "acc_norm": 0.7575757575757576, "acc_norm_stderr": 0.030532892233932022 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.8497409326424871, "acc_stderr": 0.02578772318072388, "acc_norm": 0.8497409326424871, "acc_norm_stderr": 0.02578772318072388 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.5487179487179488, "acc_stderr": 0.025230381238934837, "acc_norm": 0.5487179487179488, "acc_norm_stderr": 0.025230381238934837 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.29259259259259257, "acc_stderr": 0.027738969632176085, "acc_norm": 0.29259259259259257, "acc_norm_stderr": 0.027738969632176085 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.6512605042016807, "acc_stderr": 0.030956636328566548, "acc_norm": 0.6512605042016807, "acc_norm_stderr": 0.030956636328566548 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.3443708609271523, "acc_stderr": 
0.038796870240733264, "acc_norm": 0.3443708609271523, "acc_norm_stderr": 0.038796870240733264 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.7871559633027523, "acc_stderr": 0.017549376389313694, "acc_norm": 0.7871559633027523, "acc_norm_stderr": 0.017549376389313694 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.4398148148148148, "acc_stderr": 0.03385177976044811, "acc_norm": 0.4398148148148148, "acc_norm_stderr": 0.03385177976044811 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.7205882352941176, "acc_stderr": 0.03149328104507956, "acc_norm": 0.7205882352941176, "acc_norm_stderr": 0.03149328104507956 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.7468354430379747, "acc_stderr": 0.028304657943035303, "acc_norm": 0.7468354430379747, "acc_norm_stderr": 0.028304657943035303 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.6053811659192825, "acc_stderr": 0.03280400504755291, "acc_norm": 0.6053811659192825, "acc_norm_stderr": 0.03280400504755291 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.7251908396946565, "acc_stderr": 0.03915345408847836, "acc_norm": 0.7251908396946565, "acc_norm_stderr": 0.03915345408847836 }, "harness|hendrycksTest-international_law|5": { "acc": 0.7933884297520661, "acc_stderr": 0.03695980128098824, "acc_norm": 0.7933884297520661, "acc_norm_stderr": 0.03695980128098824 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.6944444444444444, "acc_stderr": 0.04453197507374984, "acc_norm": 0.6944444444444444, "acc_norm_stderr": 0.04453197507374984 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7300613496932515, "acc_stderr": 0.03487825168497892, "acc_norm": 0.7300613496932515, "acc_norm_stderr": 0.03487825168497892 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.4107142857142857, "acc_stderr": 0.04669510663875191, "acc_norm": 0.4107142857142857, "acc_norm_stderr": 0.04669510663875191 }, "harness|hendrycksTest-management|5": { "acc": 0.7378640776699029, "acc_stderr": 0.04354631077260595, "acc_norm": 0.7378640776699029, "acc_norm_stderr": 0.04354631077260595 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8589743589743589, "acc_stderr": 0.022801382534597556, "acc_norm": 0.8589743589743589, "acc_norm_stderr": 0.022801382534597556 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.69, "acc_stderr": 0.04648231987117316, "acc_norm": 0.69, "acc_norm_stderr": 0.04648231987117316 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.7675606641123882, "acc_stderr": 0.015104550008905716, "acc_norm": 0.7675606641123882, "acc_norm_stderr": 0.015104550008905716 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.6820809248554913, "acc_stderr": 0.025070713719153193, "acc_norm": 0.6820809248554913, "acc_norm_stderr": 0.025070713719153193 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.3016759776536313, "acc_stderr": 0.015350767572220285, "acc_norm": 0.3016759776536313, "acc_norm_stderr": 0.015350767572220285 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.6830065359477124, "acc_stderr": 0.026643278474508755, "acc_norm": 0.6830065359477124, "acc_norm_stderr": 0.026643278474508755 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.6945337620578779, "acc_stderr": 0.026160584450140453, "acc_norm": 0.6945337620578779, "acc_norm_stderr": 0.026160584450140453 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.7067901234567902, "acc_stderr": 0.025329888171900926, "acc_norm": 0.7067901234567902, "acc_norm_stderr": 0.025329888171900926 }, 
"harness|hendrycksTest-professional_accounting|5": { "acc": 0.4574468085106383, "acc_stderr": 0.02971928127223685, "acc_norm": 0.4574468085106383, "acc_norm_stderr": 0.02971928127223685 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.4282920469361147, "acc_stderr": 0.012638223880313168, "acc_norm": 0.4282920469361147, "acc_norm_stderr": 0.012638223880313168 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.6397058823529411, "acc_stderr": 0.029163128570670733, "acc_norm": 0.6397058823529411, "acc_norm_stderr": 0.029163128570670733 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6225490196078431, "acc_stderr": 0.01961085147488029, "acc_norm": 0.6225490196078431, "acc_norm_stderr": 0.01961085147488029 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.7363636363636363, "acc_stderr": 0.04220224692971987, "acc_norm": 0.7363636363636363, "acc_norm_stderr": 0.04220224692971987 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.6979591836734694, "acc_stderr": 0.0293936093198798, "acc_norm": 0.6979591836734694, "acc_norm_stderr": 0.0293936093198798 }, "harness|hendrycksTest-sociology|5": { "acc": 0.7064676616915423, "acc_stderr": 0.03220024104534205, "acc_norm": 0.7064676616915423, "acc_norm_stderr": 0.03220024104534205 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.8, "acc_stderr": 0.04020151261036846, "acc_norm": 0.8, "acc_norm_stderr": 0.04020151261036846 }, "harness|hendrycksTest-virology|5": { "acc": 0.4759036144578313, "acc_stderr": 0.03887971849597264, "acc_norm": 0.4759036144578313, "acc_norm_stderr": 0.03887971849597264 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.8187134502923976, "acc_stderr": 0.029547741687640038, "acc_norm": 0.8187134502923976, "acc_norm_stderr": 0.029547741687640038 }, "harness|truthfulqa:mc|0": { "mc1": 0.5385556915544676, "mc1_stderr": 0.017451384104637455, "mc2": 0.678373098818418, "mc2_stderr": 0.01534656228604094 }, "harness|winogrande|5": { "acc": 0.7695343330702447, "acc_stderr": 0.011835872164836682 }, "harness|gsm8k|5": { "acc": 0.38817285822592873, "acc_stderr": 0.013423607564002744 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. 
news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
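## Loading examples [optional]

Because the per-task details are spread across many configurations, it can help to enumerate them programmatically before picking one to load. The snippet below is only a minimal sketch using the `get_dataset_config_names` helper from the `datasets` library; it assumes network access to the Hugging Face Hub.

```python
from datasets import get_dataset_config_names

# List the available configurations (per-task details plus the aggregated "results" config).
configs = get_dataset_config_names(
    "open-llm-leaderboard/details_SCE__Mistral-7B-summ-ia3-pruned20"
)

print(len(configs))   # expected to match the number of configurations listed in this card
print(configs[:5])    # preview a few configuration names
```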
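The aggregated scores shown under "Latest results" are also available through the "results" configuration, whose "latest" split points at the most recent run. The sketch below does not assume a particular column layout for the aggregated parquet file; it simply prints the features and the first row so you can inspect the schema yourself.

```python
from datasets import load_dataset

# Load the aggregated results; the "latest" split tracks the most recent evaluation run.
results = load_dataset(
    "open-llm-leaderboard/details_SCE__Mistral-7B-summ-ia3-pruned20",
    "results",
    split="latest",
)

# Inspect the schema and the first aggregated row rather than assuming column names.
print(results.features)
print(results[0])
```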
open-llm-leaderboard/details_SCE__Mistral-7B-summ-ia3-pruned20
[ "region:us" ]
2024-01-29T08:28:04+00:00
{"pretty_name": "Evaluation run of SCE/Mistral-7B-summ-ia3-pruned20", "dataset_summary": "Dataset automatically created during the evaluation run of model [SCE/Mistral-7B-summ-ia3-pruned20](https://huggingface.co/SCE/Mistral-7B-summ-ia3-pruned20) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_SCE__Mistral-7B-summ-ia3-pruned20\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-29T08:25:43.679587](https://huggingface.co/datasets/open-llm-leaderboard/details_SCE__Mistral-7B-summ-ia3-pruned20/blob/main/results_2024-01-29T08-25-43.679587.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6008987872395051,\n \"acc_stderr\": 0.03325115836537348,\n \"acc_norm\": 0.6055605429668626,\n \"acc_norm_stderr\": 0.03392591539172697,\n \"mc1\": 0.5385556915544676,\n \"mc1_stderr\": 0.017451384104637455,\n \"mc2\": 0.678373098818418,\n \"mc2_stderr\": 0.01534656228604094\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.575938566552901,\n \"acc_stderr\": 0.014441889627464392,\n \"acc_norm\": 0.628839590443686,\n \"acc_norm_stderr\": 0.014117971901142824\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6696873132842064,\n \"acc_stderr\": 0.00469364435720205,\n \"acc_norm\": 0.8477394941246763,\n \"acc_norm_stderr\": 0.003585389636472375\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252605,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252605\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6,\n \"acc_stderr\": 0.04232073695151589,\n \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.04232073695151589\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6447368421052632,\n \"acc_stderr\": 0.038947344870133176,\n \"acc_norm\": 0.6447368421052632,\n \"acc_norm_stderr\": 0.038947344870133176\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.61,\n \"acc_stderr\": 0.04902071300001975,\n \"acc_norm\": 0.61,\n \"acc_norm_stderr\": 0.04902071300001975\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6679245283018868,\n \"acc_stderr\": 0.02898545565233439,\n \"acc_norm\": 0.6679245283018868,\n \"acc_norm_stderr\": 0.02898545565233439\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6805555555555556,\n \"acc_stderr\": 0.038990736873573344,\n \"acc_norm\": 0.6805555555555556,\n \"acc_norm_stderr\": 0.038990736873573344\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.41,\n \"acc_stderr\": 
0.049431107042371025,\n \"acc_norm\": 0.41,\n \"acc_norm_stderr\": 0.049431107042371025\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.56,\n \"acc_stderr\": 0.049888765156985884,\n \"acc_norm\": 0.56,\n \"acc_norm_stderr\": 0.049888765156985884\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695236,\n \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695236\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5664739884393064,\n \"acc_stderr\": 0.03778621079092056,\n \"acc_norm\": 0.5664739884393064,\n \"acc_norm_stderr\": 0.03778621079092056\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.43137254901960786,\n \"acc_stderr\": 0.04928099597287534,\n \"acc_norm\": 0.43137254901960786,\n \"acc_norm_stderr\": 0.04928099597287534\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.68,\n \"acc_stderr\": 0.04688261722621504,\n \"acc_norm\": 0.68,\n \"acc_norm_stderr\": 0.04688261722621504\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.49361702127659574,\n \"acc_stderr\": 0.03268335899936336,\n \"acc_norm\": 0.49361702127659574,\n \"acc_norm_stderr\": 0.03268335899936336\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.41228070175438597,\n \"acc_stderr\": 0.046306532033665956,\n \"acc_norm\": 0.41228070175438597,\n \"acc_norm_stderr\": 0.046306532033665956\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.6137931034482759,\n \"acc_stderr\": 0.04057324734419035,\n \"acc_norm\": 0.6137931034482759,\n \"acc_norm_stderr\": 0.04057324734419035\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.3835978835978836,\n \"acc_stderr\": 0.025043757318520193,\n \"acc_norm\": 0.3835978835978836,\n \"acc_norm_stderr\": 0.025043757318520193\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.38095238095238093,\n \"acc_stderr\": 0.04343525428949098,\n \"acc_norm\": 0.38095238095238093,\n \"acc_norm_stderr\": 0.04343525428949098\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.38,\n \"acc_stderr\": 0.048783173121456316,\n \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.048783173121456316\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.6290322580645161,\n \"acc_stderr\": 0.027480541887953593,\n \"acc_norm\": 0.6290322580645161,\n \"acc_norm_stderr\": 0.027480541887953593\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.46798029556650245,\n \"acc_stderr\": 0.03510766597959217,\n \"acc_norm\": 0.46798029556650245,\n \"acc_norm_stderr\": 0.03510766597959217\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.62,\n \"acc_stderr\": 0.04878317312145632,\n \"acc_norm\": 0.62,\n \"acc_norm_stderr\": 0.04878317312145632\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7454545454545455,\n \"acc_stderr\": 0.03401506715249039,\n \"acc_norm\": 0.7454545454545455,\n \"acc_norm_stderr\": 0.03401506715249039\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7575757575757576,\n \"acc_stderr\": 0.030532892233932022,\n \"acc_norm\": 0.7575757575757576,\n \"acc_norm_stderr\": 0.030532892233932022\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8497409326424871,\n \"acc_stderr\": 0.02578772318072388,\n \"acc_norm\": 0.8497409326424871,\n \"acc_norm_stderr\": 0.02578772318072388\n },\n 
\"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.5487179487179488,\n \"acc_stderr\": 0.025230381238934837,\n \"acc_norm\": 0.5487179487179488,\n \"acc_norm_stderr\": 0.025230381238934837\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.29259259259259257,\n \"acc_stderr\": 0.027738969632176085,\n \"acc_norm\": 0.29259259259259257,\n \"acc_norm_stderr\": 0.027738969632176085\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6512605042016807,\n \"acc_stderr\": 0.030956636328566548,\n \"acc_norm\": 0.6512605042016807,\n \"acc_norm_stderr\": 0.030956636328566548\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3443708609271523,\n \"acc_stderr\": 0.038796870240733264,\n \"acc_norm\": 0.3443708609271523,\n \"acc_norm_stderr\": 0.038796870240733264\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.7871559633027523,\n \"acc_stderr\": 0.017549376389313694,\n \"acc_norm\": 0.7871559633027523,\n \"acc_norm_stderr\": 0.017549376389313694\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.4398148148148148,\n \"acc_stderr\": 0.03385177976044811,\n \"acc_norm\": 0.4398148148148148,\n \"acc_norm_stderr\": 0.03385177976044811\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7205882352941176,\n \"acc_stderr\": 0.03149328104507956,\n \"acc_norm\": 0.7205882352941176,\n \"acc_norm_stderr\": 0.03149328104507956\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7468354430379747,\n \"acc_stderr\": 0.028304657943035303,\n \"acc_norm\": 0.7468354430379747,\n \"acc_norm_stderr\": 0.028304657943035303\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6053811659192825,\n \"acc_stderr\": 0.03280400504755291,\n \"acc_norm\": 0.6053811659192825,\n \"acc_norm_stderr\": 0.03280400504755291\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7251908396946565,\n \"acc_stderr\": 0.03915345408847836,\n \"acc_norm\": 0.7251908396946565,\n \"acc_norm_stderr\": 0.03915345408847836\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7933884297520661,\n \"acc_stderr\": 0.03695980128098824,\n \"acc_norm\": 0.7933884297520661,\n \"acc_norm_stderr\": 0.03695980128098824\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.6944444444444444,\n \"acc_stderr\": 0.04453197507374984,\n \"acc_norm\": 0.6944444444444444,\n \"acc_norm_stderr\": 0.04453197507374984\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7300613496932515,\n \"acc_stderr\": 0.03487825168497892,\n \"acc_norm\": 0.7300613496932515,\n \"acc_norm_stderr\": 0.03487825168497892\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4107142857142857,\n \"acc_stderr\": 0.04669510663875191,\n \"acc_norm\": 0.4107142857142857,\n \"acc_norm_stderr\": 0.04669510663875191\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7378640776699029,\n \"acc_stderr\": 0.04354631077260595,\n \"acc_norm\": 0.7378640776699029,\n \"acc_norm_stderr\": 0.04354631077260595\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8589743589743589,\n \"acc_stderr\": 0.022801382534597556,\n \"acc_norm\": 0.8589743589743589,\n \"acc_norm_stderr\": 0.022801382534597556\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 
0.7675606641123882,\n \"acc_stderr\": 0.015104550008905716,\n \"acc_norm\": 0.7675606641123882,\n \"acc_norm_stderr\": 0.015104550008905716\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.6820809248554913,\n \"acc_stderr\": 0.025070713719153193,\n \"acc_norm\": 0.6820809248554913,\n \"acc_norm_stderr\": 0.025070713719153193\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3016759776536313,\n \"acc_stderr\": 0.015350767572220285,\n \"acc_norm\": 0.3016759776536313,\n \"acc_norm_stderr\": 0.015350767572220285\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.6830065359477124,\n \"acc_stderr\": 0.026643278474508755,\n \"acc_norm\": 0.6830065359477124,\n \"acc_norm_stderr\": 0.026643278474508755\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6945337620578779,\n \"acc_stderr\": 0.026160584450140453,\n \"acc_norm\": 0.6945337620578779,\n \"acc_norm_stderr\": 0.026160584450140453\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7067901234567902,\n \"acc_stderr\": 0.025329888171900926,\n \"acc_norm\": 0.7067901234567902,\n \"acc_norm_stderr\": 0.025329888171900926\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4574468085106383,\n \"acc_stderr\": 0.02971928127223685,\n \"acc_norm\": 0.4574468085106383,\n \"acc_norm_stderr\": 0.02971928127223685\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4282920469361147,\n \"acc_stderr\": 0.012638223880313168,\n \"acc_norm\": 0.4282920469361147,\n \"acc_norm_stderr\": 0.012638223880313168\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6397058823529411,\n \"acc_stderr\": 0.029163128570670733,\n \"acc_norm\": 0.6397058823529411,\n \"acc_norm_stderr\": 0.029163128570670733\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6225490196078431,\n \"acc_stderr\": 0.01961085147488029,\n \"acc_norm\": 0.6225490196078431,\n \"acc_norm_stderr\": 0.01961085147488029\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7363636363636363,\n \"acc_stderr\": 0.04220224692971987,\n \"acc_norm\": 0.7363636363636363,\n \"acc_norm_stderr\": 0.04220224692971987\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.6979591836734694,\n \"acc_stderr\": 0.0293936093198798,\n \"acc_norm\": 0.6979591836734694,\n \"acc_norm_stderr\": 0.0293936093198798\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7064676616915423,\n \"acc_stderr\": 0.03220024104534205,\n \"acc_norm\": 0.7064676616915423,\n \"acc_norm_stderr\": 0.03220024104534205\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.8,\n \"acc_stderr\": 0.04020151261036846,\n \"acc_norm\": 0.8,\n \"acc_norm_stderr\": 0.04020151261036846\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.4759036144578313,\n \"acc_stderr\": 0.03887971849597264,\n \"acc_norm\": 0.4759036144578313,\n \"acc_norm_stderr\": 0.03887971849597264\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8187134502923976,\n \"acc_stderr\": 0.029547741687640038,\n \"acc_norm\": 0.8187134502923976,\n \"acc_norm_stderr\": 0.029547741687640038\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5385556915544676,\n \"mc1_stderr\": 0.017451384104637455,\n \"mc2\": 0.678373098818418,\n \"mc2_stderr\": 0.01534656228604094\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7695343330702447,\n \"acc_stderr\": 0.011835872164836682\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.38817285822592873,\n \"acc_stderr\": 0.013423607564002744\n 
}\n}\n```", "repo_url": "https://huggingface.co/SCE/Mistral-7B-summ-ia3-pruned20", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|arc:challenge|25_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|gsm8k|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|hellaswag|10_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-29T08-25-43.679587.parquet", 
"**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-29T08-25-43.679587.parquet", 
"**/details_harness|hendrycksTest-college_computer_science|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-29T08-25-43.679587.parquet", 
"**/details_harness|hendrycksTest-philosophy|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-29T08-25-43.679587.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": 
["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-global_facts|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-29T08-25-43.679587.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_29T08_25_43.679587", "path": ["**/details_harness|winogrande|5_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-29T08-25-43.679587.parquet"]}]}, {"config_name": "results", "data_files": [{"split": 
"2024_01_29T08_25_43.679587", "path": ["results_2024-01-29T08-25-43.679587.parquet"]}, {"split": "latest", "path": ["results_2024-01-29T08-25-43.679587.parquet"]}]}]}
2024-01-29T08:28:29+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of SCE/Mistral-7B-summ-ia3-pruned20 Dataset automatically created during the evaluation run of model SCE/Mistral-7B-summ-ia3-pruned20 on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2024-01-29T08:25:43.679587 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
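The loading snippet referenced in the card text above was stripped from this processed rendering; the following is a minimal sketch of what it looks like for this run, assuming the details repository follows the leaderboard's usual `open-llm-leaderboard/details_<org>__<model>` naming (the exact repository id is not shown in this record and may need adjusting):

```python
from datasets import load_dataset

# Minimal sketch: load one task configuration from the details dataset.
# The repository id below is an assumption based on the leaderboard naming pattern.
data = load_dataset(
    "open-llm-leaderboard/details_SCE__Mistral-7B-summ-ia3-pruned20",
    "harness_winogrande_5",  # any config listed in this record's metadata works
    split="train",           # "train" always points at the latest results
)
```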
[ "# Dataset Card for Evaluation run of SCE/Mistral-7B-summ-ia3-pruned20\n\n\n\nDataset automatically created during the evaluation run of model SCE/Mistral-7B-summ-ia3-pruned20 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-01-29T08:25:43.679587(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of SCE/Mistral-7B-summ-ia3-pruned20\n\n\n\nDataset automatically created during the evaluation run of model SCE/Mistral-7B-summ-ia3-pruned20 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-01-29T08:25:43.679587(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
8d5f712b46a120d008d5d9babd8c0f5ef76cf15b
# Dataset Card for Evaluation run of SCE/Mistral-7B-summ-lora-tuned <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [SCE/Mistral-7B-summ-lora-tuned](https://huggingface.co/SCE/Mistral-7B-summ-lora-tuned) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configuration, each one coresponding to one of the evaluated task. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results. An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_SCE__Mistral-7B-summ-lora-tuned", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2024-01-29T08:32:02.958349](https://huggingface.co/datasets/open-llm-leaderboard/details_SCE__Mistral-7B-summ-lora-tuned/blob/main/results_2024-01-29T08-32-02.958349.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.6057599078213342, "acc_stderr": 0.033182045074560504, "acc_norm": 0.6103243256357017, "acc_norm_stderr": 0.03385586534296844, "mc1": 0.5471236230110159, "mc1_stderr": 0.01742558984831402, "mc2": 0.7018312697546788, "mc2_stderr": 0.015073530920635063 }, "harness|arc:challenge|25": { "acc": 0.5878839590443686, "acc_stderr": 0.014383915302225407, "acc_norm": 0.6279863481228669, "acc_norm_stderr": 0.01412459788184446 }, "harness|hellaswag|10": { "acc": 0.6741684923322048, "acc_stderr": 0.0046772682828393995, "acc_norm": 0.8519219279028082, "acc_norm_stderr": 0.0035445147718340926 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.31, "acc_stderr": 0.046482319871173156, "acc_norm": 0.31, "acc_norm_stderr": 0.046482319871173156 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.5851851851851851, "acc_stderr": 0.04256193767901408, "acc_norm": 0.5851851851851851, "acc_norm_stderr": 0.04256193767901408 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.625, "acc_stderr": 0.039397364351956274, "acc_norm": 0.625, "acc_norm_stderr": 0.039397364351956274 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.61, "acc_stderr": 0.04902071300001974, "acc_norm": 0.61, "acc_norm_stderr": 0.04902071300001974 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.6754716981132075, "acc_stderr": 0.02881561571343211, "acc_norm": 0.6754716981132075, "acc_norm_stderr": 0.02881561571343211 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.6805555555555556, "acc_stderr": 0.038990736873573344, "acc_norm": 0.6805555555555556, "acc_norm_stderr": 0.038990736873573344 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.42, "acc_stderr": 0.049604496374885836, "acc_norm": 0.42, "acc_norm_stderr": 0.049604496374885836 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.5, "acc_stderr": 0.050251890762960605, "acc_norm": 0.5, "acc_norm_stderr": 0.050251890762960605 }, 
"harness|hendrycksTest-college_mathematics|5": { "acc": 0.38, "acc_stderr": 0.04878317312145632, "acc_norm": 0.38, "acc_norm_stderr": 0.04878317312145632 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.5838150289017341, "acc_stderr": 0.03758517775404947, "acc_norm": 0.5838150289017341, "acc_norm_stderr": 0.03758517775404947 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.43137254901960786, "acc_stderr": 0.04928099597287534, "acc_norm": 0.43137254901960786, "acc_norm_stderr": 0.04928099597287534 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.69, "acc_stderr": 0.04648231987117316, "acc_norm": 0.69, "acc_norm_stderr": 0.04648231987117316 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.5276595744680851, "acc_stderr": 0.03263597118409769, "acc_norm": 0.5276595744680851, "acc_norm_stderr": 0.03263597118409769 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.4298245614035088, "acc_stderr": 0.04657047260594963, "acc_norm": 0.4298245614035088, "acc_norm_stderr": 0.04657047260594963 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.6068965517241379, "acc_stderr": 0.0407032901370707, "acc_norm": 0.6068965517241379, "acc_norm_stderr": 0.0407032901370707 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.3783068783068783, "acc_stderr": 0.02497695405315525, "acc_norm": 0.3783068783068783, "acc_norm_stderr": 0.02497695405315525 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.4444444444444444, "acc_stderr": 0.044444444444444495, "acc_norm": 0.4444444444444444, "acc_norm_stderr": 0.044444444444444495 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.36, "acc_stderr": 0.04824181513244218, "acc_norm": 0.36, "acc_norm_stderr": 0.04824181513244218 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.603225806451613, "acc_stderr": 0.027831231605767948, "acc_norm": 0.603225806451613, "acc_norm_stderr": 0.027831231605767948 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.5172413793103449, "acc_stderr": 0.035158955511656986, "acc_norm": 0.5172413793103449, "acc_norm_stderr": 0.035158955511656986 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.61, "acc_stderr": 0.04902071300001974, "acc_norm": 0.61, "acc_norm_stderr": 0.04902071300001974 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.7454545454545455, "acc_stderr": 0.03401506715249039, "acc_norm": 0.7454545454545455, "acc_norm_stderr": 0.03401506715249039 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.7474747474747475, "acc_stderr": 0.03095405547036589, "acc_norm": 0.7474747474747475, "acc_norm_stderr": 0.03095405547036589 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.844559585492228, "acc_stderr": 0.026148483469153303, "acc_norm": 0.844559585492228, "acc_norm_stderr": 0.026148483469153303 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.5692307692307692, "acc_stderr": 0.025106820660539753, "acc_norm": 0.5692307692307692, "acc_norm_stderr": 0.025106820660539753 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.3, "acc_stderr": 0.02794045713622839, "acc_norm": 0.3, "acc_norm_stderr": 0.02794045713622839 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.6722689075630253, "acc_stderr": 0.030489911417673227, "acc_norm": 0.6722689075630253, "acc_norm_stderr": 0.030489911417673227 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.3509933774834437, "acc_stderr": 0.03896981964257375, "acc_norm": 0.3509933774834437, 
"acc_norm_stderr": 0.03896981964257375 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.7926605504587156, "acc_stderr": 0.017381415563608674, "acc_norm": 0.7926605504587156, "acc_norm_stderr": 0.017381415563608674 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.4398148148148148, "acc_stderr": 0.03385177976044812, "acc_norm": 0.4398148148148148, "acc_norm_stderr": 0.03385177976044812 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.7843137254901961, "acc_stderr": 0.028867431449849316, "acc_norm": 0.7843137254901961, "acc_norm_stderr": 0.028867431449849316 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.759493670886076, "acc_stderr": 0.02782078198114969, "acc_norm": 0.759493670886076, "acc_norm_stderr": 0.02782078198114969 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.6143497757847534, "acc_stderr": 0.03266842214289201, "acc_norm": 0.6143497757847534, "acc_norm_stderr": 0.03266842214289201 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.7251908396946565, "acc_stderr": 0.03915345408847836, "acc_norm": 0.7251908396946565, "acc_norm_stderr": 0.03915345408847836 }, "harness|hendrycksTest-international_law|5": { "acc": 0.8016528925619835, "acc_stderr": 0.03640118271990947, "acc_norm": 0.8016528925619835, "acc_norm_stderr": 0.03640118271990947 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.7129629629629629, "acc_stderr": 0.04373313040914761, "acc_norm": 0.7129629629629629, "acc_norm_stderr": 0.04373313040914761 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.754601226993865, "acc_stderr": 0.03380939813943354, "acc_norm": 0.754601226993865, "acc_norm_stderr": 0.03380939813943354 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.4375, "acc_stderr": 0.04708567521880525, "acc_norm": 0.4375, "acc_norm_stderr": 0.04708567521880525 }, "harness|hendrycksTest-management|5": { "acc": 0.7669902912621359, "acc_stderr": 0.04185832598928315, "acc_norm": 0.7669902912621359, "acc_norm_stderr": 0.04185832598928315 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8589743589743589, "acc_stderr": 0.022801382534597552, "acc_norm": 0.8589743589743589, "acc_norm_stderr": 0.022801382534597552 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.68, "acc_stderr": 0.046882617226215034, "acc_norm": 0.68, "acc_norm_stderr": 0.046882617226215034 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.7713920817369093, "acc_stderr": 0.015016884698539878, "acc_norm": 0.7713920817369093, "acc_norm_stderr": 0.015016884698539878 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.6878612716763006, "acc_stderr": 0.024946792225272314, "acc_norm": 0.6878612716763006, "acc_norm_stderr": 0.024946792225272314 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.2871508379888268, "acc_stderr": 0.015131608849963766, "acc_norm": 0.2871508379888268, "acc_norm_stderr": 0.015131608849963766 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.6797385620915033, "acc_stderr": 0.026716118380156854, "acc_norm": 0.6797385620915033, "acc_norm_stderr": 0.026716118380156854 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.6881028938906752, "acc_stderr": 0.02631185807185416, "acc_norm": 0.6881028938906752, "acc_norm_stderr": 0.02631185807185416 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.6975308641975309, "acc_stderr": 0.025557653981868045, "acc_norm": 0.6975308641975309, "acc_norm_stderr": 0.025557653981868045 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.4645390070921986, "acc_stderr": 
0.02975238965742705, "acc_norm": 0.4645390070921986, "acc_norm_stderr": 0.02975238965742705 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.4302477183833116, "acc_stderr": 0.012645361435115233, "acc_norm": 0.4302477183833116, "acc_norm_stderr": 0.012645361435115233 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.625, "acc_stderr": 0.029408372932278746, "acc_norm": 0.625, "acc_norm_stderr": 0.029408372932278746 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.630718954248366, "acc_stderr": 0.019524316744866353, "acc_norm": 0.630718954248366, "acc_norm_stderr": 0.019524316744866353 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.7181818181818181, "acc_stderr": 0.043091187099464585, "acc_norm": 0.7181818181818181, "acc_norm_stderr": 0.043091187099464585 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.7061224489795919, "acc_stderr": 0.02916273841024977, "acc_norm": 0.7061224489795919, "acc_norm_stderr": 0.02916273841024977 }, "harness|hendrycksTest-sociology|5": { "acc": 0.6766169154228856, "acc_stderr": 0.03307615947979033, "acc_norm": 0.6766169154228856, "acc_norm_stderr": 0.03307615947979033 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.82, "acc_stderr": 0.03861229196653693, "acc_norm": 0.82, "acc_norm_stderr": 0.03861229196653693 }, "harness|hendrycksTest-virology|5": { "acc": 0.4939759036144578, "acc_stderr": 0.03892212195333047, "acc_norm": 0.4939759036144578, "acc_norm_stderr": 0.03892212195333047 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.8245614035087719, "acc_stderr": 0.02917088550072767, "acc_norm": 0.8245614035087719, "acc_norm_stderr": 0.02917088550072767 }, "harness|truthfulqa:mc|0": { "mc1": 0.5471236230110159, "mc1_stderr": 0.01742558984831402, "mc2": 0.7018312697546788, "mc2_stderr": 0.015073530920635063 }, "harness|winogrande|5": { "acc": 0.7719021310181531, "acc_stderr": 0.011793015817663597 }, "harness|gsm8k|5": { "acc": 0.38817285822592873, "acc_stderr": 0.013423607564002751 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). 
--> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
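Beyond the per-task loading pattern already shown in the card above, the aggregated metrics can be pulled through the "results" configuration it describes. A short sketch, assuming the "results" config exposes a "latest" split as in the other records of this dump:

```python
from datasets import load_dataset

# Sketch: load the aggregated run metrics instead of a single task.
# "latest" mirrors the most recent timestamped split of the "results" config.
results = load_dataset(
    "open-llm-leaderboard/details_SCE__Mistral-7B-summ-lora-tuned",
    "results",
    split="latest",
)
print(results[0])  # one row per run with the aggregated metric fields
```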
open-llm-leaderboard/details_SCE__Mistral-7B-summ-lora-tuned
[ "region:us" ]
2024-01-29T08:34:23+00:00
{"pretty_name": "Evaluation run of SCE/Mistral-7B-summ-lora-tuned", "dataset_summary": "Dataset automatically created during the evaluation run of model [SCE/Mistral-7B-summ-lora-tuned](https://huggingface.co/SCE/Mistral-7B-summ-lora-tuned) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_SCE__Mistral-7B-summ-lora-tuned\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-29T08:32:02.958349](https://huggingface.co/datasets/open-llm-leaderboard/details_SCE__Mistral-7B-summ-lora-tuned/blob/main/results_2024-01-29T08-32-02.958349.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6057599078213342,\n \"acc_stderr\": 0.033182045074560504,\n \"acc_norm\": 0.6103243256357017,\n \"acc_norm_stderr\": 0.03385586534296844,\n \"mc1\": 0.5471236230110159,\n \"mc1_stderr\": 0.01742558984831402,\n \"mc2\": 0.7018312697546788,\n \"mc2_stderr\": 0.015073530920635063\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5878839590443686,\n \"acc_stderr\": 0.014383915302225407,\n \"acc_norm\": 0.6279863481228669,\n \"acc_norm_stderr\": 0.01412459788184446\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6741684923322048,\n \"acc_stderr\": 0.0046772682828393995,\n \"acc_norm\": 0.8519219279028082,\n \"acc_norm_stderr\": 0.0035445147718340926\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.31,\n \"acc_stderr\": 0.046482319871173156,\n \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.046482319871173156\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5851851851851851,\n \"acc_stderr\": 0.04256193767901408,\n \"acc_norm\": 0.5851851851851851,\n \"acc_norm_stderr\": 0.04256193767901408\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.625,\n \"acc_stderr\": 0.039397364351956274,\n \"acc_norm\": 0.625,\n \"acc_norm_stderr\": 0.039397364351956274\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.61,\n \"acc_stderr\": 0.04902071300001974,\n \"acc_norm\": 0.61,\n \"acc_norm_stderr\": 0.04902071300001974\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6754716981132075,\n \"acc_stderr\": 0.02881561571343211,\n \"acc_norm\": 0.6754716981132075,\n \"acc_norm_stderr\": 0.02881561571343211\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6805555555555556,\n \"acc_stderr\": 0.038990736873573344,\n \"acc_norm\": 0.6805555555555556,\n \"acc_norm_stderr\": 0.038990736873573344\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.42,\n \"acc_stderr\": 
0.049604496374885836,\n \"acc_norm\": 0.42,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.050251890762960605,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.050251890762960605\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.38,\n \"acc_stderr\": 0.04878317312145632,\n \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.04878317312145632\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5838150289017341,\n \"acc_stderr\": 0.03758517775404947,\n \"acc_norm\": 0.5838150289017341,\n \"acc_norm_stderr\": 0.03758517775404947\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.43137254901960786,\n \"acc_stderr\": 0.04928099597287534,\n \"acc_norm\": 0.43137254901960786,\n \"acc_norm_stderr\": 0.04928099597287534\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5276595744680851,\n \"acc_stderr\": 0.03263597118409769,\n \"acc_norm\": 0.5276595744680851,\n \"acc_norm_stderr\": 0.03263597118409769\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4298245614035088,\n \"acc_stderr\": 0.04657047260594963,\n \"acc_norm\": 0.4298245614035088,\n \"acc_norm_stderr\": 0.04657047260594963\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.6068965517241379,\n \"acc_stderr\": 0.0407032901370707,\n \"acc_norm\": 0.6068965517241379,\n \"acc_norm_stderr\": 0.0407032901370707\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.3783068783068783,\n \"acc_stderr\": 0.02497695405315525,\n \"acc_norm\": 0.3783068783068783,\n \"acc_norm_stderr\": 0.02497695405315525\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4444444444444444,\n \"acc_stderr\": 0.044444444444444495,\n \"acc_norm\": 0.4444444444444444,\n \"acc_norm_stderr\": 0.044444444444444495\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.36,\n \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.603225806451613,\n \"acc_stderr\": 0.027831231605767948,\n \"acc_norm\": 0.603225806451613,\n \"acc_norm_stderr\": 0.027831231605767948\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5172413793103449,\n \"acc_stderr\": 0.035158955511656986,\n \"acc_norm\": 0.5172413793103449,\n \"acc_norm_stderr\": 0.035158955511656986\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.61,\n \"acc_stderr\": 0.04902071300001974,\n \"acc_norm\": 0.61,\n \"acc_norm_stderr\": 0.04902071300001974\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7454545454545455,\n \"acc_stderr\": 0.03401506715249039,\n \"acc_norm\": 0.7454545454545455,\n \"acc_norm_stderr\": 0.03401506715249039\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7474747474747475,\n \"acc_stderr\": 0.03095405547036589,\n \"acc_norm\": 0.7474747474747475,\n \"acc_norm_stderr\": 0.03095405547036589\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.844559585492228,\n \"acc_stderr\": 0.026148483469153303,\n \"acc_norm\": 0.844559585492228,\n \"acc_norm_stderr\": 0.026148483469153303\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": 
{\n \"acc\": 0.5692307692307692,\n \"acc_stderr\": 0.025106820660539753,\n \"acc_norm\": 0.5692307692307692,\n \"acc_norm_stderr\": 0.025106820660539753\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.02794045713622839,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.02794045713622839\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6722689075630253,\n \"acc_stderr\": 0.030489911417673227,\n \"acc_norm\": 0.6722689075630253,\n \"acc_norm_stderr\": 0.030489911417673227\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3509933774834437,\n \"acc_stderr\": 0.03896981964257375,\n \"acc_norm\": 0.3509933774834437,\n \"acc_norm_stderr\": 0.03896981964257375\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.7926605504587156,\n \"acc_stderr\": 0.017381415563608674,\n \"acc_norm\": 0.7926605504587156,\n \"acc_norm_stderr\": 0.017381415563608674\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.4398148148148148,\n \"acc_stderr\": 0.03385177976044812,\n \"acc_norm\": 0.4398148148148148,\n \"acc_norm_stderr\": 0.03385177976044812\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7843137254901961,\n \"acc_stderr\": 0.028867431449849316,\n \"acc_norm\": 0.7843137254901961,\n \"acc_norm_stderr\": 0.028867431449849316\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.759493670886076,\n \"acc_stderr\": 0.02782078198114969,\n \"acc_norm\": 0.759493670886076,\n \"acc_norm_stderr\": 0.02782078198114969\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6143497757847534,\n \"acc_stderr\": 0.03266842214289201,\n \"acc_norm\": 0.6143497757847534,\n \"acc_norm_stderr\": 0.03266842214289201\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7251908396946565,\n \"acc_stderr\": 0.03915345408847836,\n \"acc_norm\": 0.7251908396946565,\n \"acc_norm_stderr\": 0.03915345408847836\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8016528925619835,\n \"acc_stderr\": 0.03640118271990947,\n \"acc_norm\": 0.8016528925619835,\n \"acc_norm_stderr\": 0.03640118271990947\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7129629629629629,\n \"acc_stderr\": 0.04373313040914761,\n \"acc_norm\": 0.7129629629629629,\n \"acc_norm_stderr\": 0.04373313040914761\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.754601226993865,\n \"acc_stderr\": 0.03380939813943354,\n \"acc_norm\": 0.754601226993865,\n \"acc_norm_stderr\": 0.03380939813943354\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4375,\n \"acc_stderr\": 0.04708567521880525,\n \"acc_norm\": 0.4375,\n \"acc_norm_stderr\": 0.04708567521880525\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7669902912621359,\n \"acc_stderr\": 0.04185832598928315,\n \"acc_norm\": 0.7669902912621359,\n \"acc_norm_stderr\": 0.04185832598928315\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8589743589743589,\n \"acc_stderr\": 0.022801382534597552,\n \"acc_norm\": 0.8589743589743589,\n \"acc_norm_stderr\": 0.022801382534597552\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.68,\n \"acc_stderr\": 0.046882617226215034,\n \"acc_norm\": 0.68,\n \"acc_norm_stderr\": 0.046882617226215034\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7713920817369093,\n \"acc_stderr\": 0.015016884698539878,\n \"acc_norm\": 0.7713920817369093,\n \"acc_norm_stderr\": 
0.015016884698539878\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.6878612716763006,\n \"acc_stderr\": 0.024946792225272314,\n \"acc_norm\": 0.6878612716763006,\n \"acc_norm_stderr\": 0.024946792225272314\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2871508379888268,\n \"acc_stderr\": 0.015131608849963766,\n \"acc_norm\": 0.2871508379888268,\n \"acc_norm_stderr\": 0.015131608849963766\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.6797385620915033,\n \"acc_stderr\": 0.026716118380156854,\n \"acc_norm\": 0.6797385620915033,\n \"acc_norm_stderr\": 0.026716118380156854\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6881028938906752,\n \"acc_stderr\": 0.02631185807185416,\n \"acc_norm\": 0.6881028938906752,\n \"acc_norm_stderr\": 0.02631185807185416\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.6975308641975309,\n \"acc_stderr\": 0.025557653981868045,\n \"acc_norm\": 0.6975308641975309,\n \"acc_norm_stderr\": 0.025557653981868045\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4645390070921986,\n \"acc_stderr\": 0.02975238965742705,\n \"acc_norm\": 0.4645390070921986,\n \"acc_norm_stderr\": 0.02975238965742705\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4302477183833116,\n \"acc_stderr\": 0.012645361435115233,\n \"acc_norm\": 0.4302477183833116,\n \"acc_norm_stderr\": 0.012645361435115233\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.625,\n \"acc_stderr\": 0.029408372932278746,\n \"acc_norm\": 0.625,\n \"acc_norm_stderr\": 0.029408372932278746\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.630718954248366,\n \"acc_stderr\": 0.019524316744866353,\n \"acc_norm\": 0.630718954248366,\n \"acc_norm_stderr\": 0.019524316744866353\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7181818181818181,\n \"acc_stderr\": 0.043091187099464585,\n \"acc_norm\": 0.7181818181818181,\n \"acc_norm_stderr\": 0.043091187099464585\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7061224489795919,\n \"acc_stderr\": 0.02916273841024977,\n \"acc_norm\": 0.7061224489795919,\n \"acc_norm_stderr\": 0.02916273841024977\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.6766169154228856,\n \"acc_stderr\": 0.03307615947979033,\n \"acc_norm\": 0.6766169154228856,\n \"acc_norm_stderr\": 0.03307615947979033\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.82,\n \"acc_stderr\": 0.03861229196653693,\n \"acc_norm\": 0.82,\n \"acc_norm_stderr\": 0.03861229196653693\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.4939759036144578,\n \"acc_stderr\": 0.03892212195333047,\n \"acc_norm\": 0.4939759036144578,\n \"acc_norm_stderr\": 0.03892212195333047\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8245614035087719,\n \"acc_stderr\": 0.02917088550072767,\n \"acc_norm\": 0.8245614035087719,\n \"acc_norm_stderr\": 0.02917088550072767\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5471236230110159,\n \"mc1_stderr\": 0.01742558984831402,\n \"mc2\": 0.7018312697546788,\n \"mc2_stderr\": 0.015073530920635063\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7719021310181531,\n \"acc_stderr\": 0.011793015817663597\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.38817285822592873,\n \"acc_stderr\": 0.013423607564002751\n }\n}\n```", "repo_url": "https://huggingface.co/SCE/Mistral-7B-summ-lora-tuned", "leaderboard_url": 
"https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|arc:challenge|25_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|gsm8k|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hellaswag|10_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-29T08-32-02.958349.parquet", 
"**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-29T08-32-02.958349.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-29T08-32-02.958349.parquet", 
"**/details_harness|hendrycksTest-prehistory|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-29T08-32-02.958349.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-29T08-32-02.958349.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-29T08-32-02.958349.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["**/details_harness|winogrande|5_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-29T08-32-02.958349.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_01_29T08_32_02.958349", "path": ["results_2024-01-29T08-32-02.958349.parquet"]}, {"split": "latest", "path": 
["results_2024-01-29T08-32-02.958349.parquet"]}]}]}
2024-01-29T08:34:47+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of SCE/Mistral-7B-summ-lora-tuned Dataset automatically created during the evaluation run of model SCE/Mistral-7B-summ-lora-tuned on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2024-01-29T08:32:02.958349 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each one in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
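The loading snippet promised above is stripped out of this plain-text rendering; the call recorded in the dataset metadata is reproduced below, with `harness_winogrande_5` standing in for any of the 63 configs:

```python
from datasets import load_dataset

# Any evaluation config can be substituted; the "latest" split always points to the most recent run.
data = load_dataset("open-llm-leaderboard/details_SCE__Mistral-7B-summ-lora-tuned",
                    "harness_winogrande_5",
                    split="train")
```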
[ "# Dataset Card for Evaluation run of SCE/Mistral-7B-summ-lora-tuned\n\n\n\nDataset automatically created during the evaluation run of model SCE/Mistral-7B-summ-lora-tuned on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-01-29T08:32:02.958349(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of SCE/Mistral-7B-summ-lora-tuned\n\n\n\nDataset automatically created during the evaluation run of model SCE/Mistral-7B-summ-lora-tuned on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-01-29T08:32:02.958349(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
56aff8b052db2f350d0baba23a1d51fb2ab74954
# Dataset Card for ESSAI

## Dataset Description

- **Homepage:** https://clementdalloux.fr/?page_id=28
- **Pubmed:** False
- **Public:** False
- **Tasks:** TXTCLASS

We manually annotated two corpora from the biomedical field. The ESSAI corpus contains clinical trial protocols in French. They were mainly obtained from the National Cancer Institute. The typical protocol consists of two parts: the summary of the trial, which indicates the purpose of the trial and the methods applied; and a detailed description of the trial with the inclusion and exclusion criteria.

The CAS corpus contains clinical cases published in scientific literature and training material. They are published in different journals from French-speaking countries (France, Belgium, Switzerland, Canada, African countries, tropical countries) and are related to various medical specialties (cardiology, urology, oncology, obstetrics, pulmonology, gastro-enterology). The purpose of clinical cases is to describe clinical situations of patients. Hence, their content is close to the content of clinical narratives (description of diagnoses, treatments or procedures, evolution, family history, expected audience, etc.). In clinical cases, negation is frequently used to describe the patient's signs, symptoms, and diagnosis. Speculation is present as well, but less frequently.

This version only contains the annotated ESSAI corpus.

## Citation Information

```
@misc{dalloux,
  title={Datasets – Clément Dalloux},
  url={http://clementdalloux.fr/?page_id=28},
  journal={Clément Dalloux},
  author={Dalloux, Clément}}
```
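For orientation, a minimal loading sketch is given below. It assumes the corpus is fetched from the `asus-aics/essai` repository this card belongs to; the exact configuration names and access conditions are not documented here, so treat the call as illustrative rather than as the official loading procedure.

```python
from datasets import load_dataset

# Illustrative only: the repository id matches this card, but the corpus is distributed
# under a data use agreement, and a script-based loader may additionally require
# trust_remote_code=True or an explicit config name.
essai = load_dataset("asus-aics/essai")
print(essai)
```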
asus-aics/essai
[ "multilinguality:monolingual", "language:fr", "license:other", "region:us" ]
2024-01-29T09:04:44+00:00
{"language": ["fr"], "license": "other", "multilinguality": "monolingual", "pretty_name": "ESSAI", "bigbio_language": ["French"], "bigbio_license_shortname": "DUA", "homepage": "https://clementdalloux.fr/?page_id=28", "bigbio_pubmed": false, "bigbio_public": false, "bigbio_tasks": ["TEXT_CLASSIFICATION"]}
2024-01-29T09:11:03+00:00
[]
[ "fr" ]
TAGS #multilinguality-monolingual #language-French #license-other #region-us
# Dataset Card for ESSAI ## Dataset Description - Homepage: URL - Pubmed: False - Public: False - Tasks: TXTCLASS We manually annotated two corpora from the biomedical field. The ESSAI corpus contains clinical trial protocols in French. They were mainly obtained from the National Cancer Institute. The typical protocol consists of two parts: the summary of the trial, which indicates the purpose of the trial and the methods applied; and a detailed description of the trial with the inclusion and exclusion criteria. The CAS corpus contains clinical cases published in scientific literature and training material. They are published in different journals from French-speaking countries (France, Belgium, Switzerland, Canada, African countries, tropical countries) and are related to various medical specialties (cardiology, urology, oncology, obstetrics, pulmonology, gastro-enterology). The purpose of clinical cases is to describe clinical situations of patients. Hence, their content is close to the content of clinical narratives (description of diagnoses, treatments or procedures, evolution, family history, expected audience, etc.). In clinical cases, negation is frequently used to describe the patient's signs, symptoms, and diagnosis. Speculation is present as well, but less frequently. This version only contains the annotated ESSAI corpus.
[ "# Dataset Card for ESSAI", "## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: False\n- Tasks: TXTCLASS\n\n\nWe manually annotated two corpora from the biomedical field. The ESSAI corpus contains clinical trial protocols in French. They were mainly obtained from the National Cancer Institute The typical protocol consists of two parts: the summary of the trial, which indicates the purpose of the trial and the methods applied; and a detailed description of the trial with the inclusion and exclusion criteria. The CAS corpus contains clinical cases published in scientific literature and training material. They are published in different journals from French-speaking countries (France, Belgium, Switzerland, Canada, African countries, tropical countries) and are related to various medical specialties (cardiology, urology, oncology, obstetrics, pulmonology, gastro-enterology). The purpose of clinical cases is to describe clinical situations of patients. Hence, their content is close to the content of clinical narratives (description of diagnoses, treatments or procedures, evolution, family history, expected audience, etc.). In clinical cases, the negation is frequently used for describing the patient signs, symptoms, and diagnosis. Speculation is present as well but less frequently.\n\nThis version only contain the annotated ESSAI corpus" ]
[ "TAGS\n#multilinguality-monolingual #language-French #license-other #region-us \n", "# Dataset Card for ESSAI", "## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: False\n- Tasks: TXTCLASS\n\n\nWe manually annotated two corpora from the biomedical field. The ESSAI corpus contains clinical trial protocols in French. They were mainly obtained from the National Cancer Institute The typical protocol consists of two parts: the summary of the trial, which indicates the purpose of the trial and the methods applied; and a detailed description of the trial with the inclusion and exclusion criteria. The CAS corpus contains clinical cases published in scientific literature and training material. They are published in different journals from French-speaking countries (France, Belgium, Switzerland, Canada, African countries, tropical countries) and are related to various medical specialties (cardiology, urology, oncology, obstetrics, pulmonology, gastro-enterology). The purpose of clinical cases is to describe clinical situations of patients. Hence, their content is close to the content of clinical narratives (description of diagnoses, treatments or procedures, evolution, family history, expected audience, etc.). In clinical cases, the negation is frequently used for describing the patient signs, symptoms, and diagnosis. Speculation is present as well but less frequently.\n\nThis version only contain the annotated ESSAI corpus" ]
a2aa3f33f6d98552c1e6615b50de138e222558f1
This dataset contains JSON formatted according to TRL's [conversational format](https://huggingface.co/docs/trl/main/en/sft_trainer#dataset-format-support), as well as a ChatML-formatted text field.
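The sketch below loads the dataset and inspects both representations; the field names (`conversations` with `from`/`value` keys, and `chatml`) and the `train` split follow the dataset configuration above.

```python
# A minimal sketch of reading both representations; field names follow the dataset's declared schema.
from datasets import load_dataset

ds = load_dataset("gardner/SlimOrca-Dedup-trl-conversational-chatml", split="train")

example = ds[0]
for turn in example["conversations"]:   # TRL conversational format: a list of {"from", "value"} turns
    print(turn["from"], ":", turn["value"][:80])

print(example["chatml"][:200])          # the same conversation rendered as a single ChatML string
```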
gardner/SlimOrca-Dedup-trl-conversational-chatml
[ "task_categories:text-generation", "task_categories:conversational", "language:en", "license:mit", "chatml", "trl", "conversational", "region:us" ]
2024-01-29T09:17:56+00:00
{"language": ["en"], "license": "mit", "task_categories": ["text-generation", "conversational"], "dataset_info": {"features": [{"name": "conversations", "list": [{"name": "from", "dtype": "string"}, {"name": "value", "dtype": "string"}]}, {"name": "chatml", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1220288632, "num_examples": 363491}], "download_size": 617604809, "dataset_size": 1220288632}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "tags": ["chatml", "trl", "conversational"]}
2024-01-29T09:22:55+00:00
[]
[ "en" ]
TAGS #task_categories-text-generation #task_categories-conversational #language-English #license-mit #chatml #trl #conversational #region-us
This dataset contains JSON formatted according to TRL's conversational format, as well as a ChatML-formatted text field.
[]
[ "TAGS\n#task_categories-text-generation #task_categories-conversational #language-English #license-mit #chatml #trl #conversational #region-us \n" ]
b219303e8e477ac1aa6290e286a25f883d91dcbf
# Dataset Card for NTCIR-13 MedWeb ## Dataset Description - **Homepage:** http://research.nii.ac.jp/ntcir/permission/ntcir-13/perm-en-MedWeb.html - **Pubmed:** False - **Public:** False - **Tasks:** TRANSL,TXTCLASS The NTCIR-13 MedWeb (Medical Natural Language Processing for Web Document) task requires performing a multi-label classification in which labels for eight diseases/symptoms must be assigned to each tweet. Given pseudo-tweets, the outputs are Positive:p or Negative:n labels for the eight diseases/symptoms. The achievements of this task can almost be directly applied to a fundamental engine for actual applications. This task provides pseudo-Twitter messages in a cross-language and multi-label corpus, covering three languages (Japanese, English, and Chinese), and annotated with the eight labels influenza, diarrhea/stomachache, hay fever, cough/sore throat, headache, fever, runny nose, and cold. For more information, see: http://research.nii.ac.jp/ntcir/permission/ntcir-13/perm-en-MedWeb.html As this dataset also provides a parallel corpus of pseudo-tweets in English, Japanese, and Chinese, it can also be used to train translation models between these three languages. ## Citation Information ``` @article{wakamiya2017overview, author = {Shoko Wakamiya, Mizuki Morita, Yoshinobu Kano, Tomoko Ohkuma and Eiji Aramaki}, title = {Overview of the NTCIR-13 MedWeb Task}, journal = {Proceedings of the 13th NTCIR Conference on Evaluation of Information Access Technologies (NTCIR-13)}, year = {2017}, url = { http://research.nii.ac.jp/ntcir/workshop/OnlineProceedings13/pdf/ntcir/01-NTCIR13-OV-MEDWEB-WakamiyaS.pdf }, } ```
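A hedged loading sketch follows; the repository id is the one this card is published under, while the configuration name is an assumption, and access may require agreeing to the NTCIR data-use terms since the corpus is not public.

```python
# A minimal sketch, assuming the default configuration of the asus-aics/ntcir_13_medweb
# repository and that access to the non-public corpus has been granted.
from datasets import load_dataset

medweb = load_dataset("asus-aics/ntcir_13_medweb")
print(medweb)  # inspect splits, the three languages, and the eight disease/symptom labels
```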
asus-aics/ntcir_13_medweb
[ "multilinguality:multilingual", "language:en", "language:zh", "language:ja", "license:cc-by-4.0", "region:us" ]
2024-01-29T09:18:05+00:00
{"language": ["en", "zh", "ja"], "license": "cc-by-4.0", "multilinguality": "multilingual", "pretty_name": "NTCIR-13 MedWeb", "bigbio_language": ["English", "Chinese", "Japanese"], "bigbio_license_shortname": "CC_BY_4p0", "homepage": "http://research.nii.ac.jp/ntcir/permission/ntcir-13/perm-en-MedWeb.html", "bigbio_pubmed": false, "bigbio_public": false, "bigbio_tasks": ["TRANSLATION", "TEXT_CLASSIFICATION"]}
2024-01-29T09:18:37+00:00
[]
[ "en", "zh", "ja" ]
TAGS #multilinguality-multilingual #language-English #language-Chinese #language-Japanese #license-cc-by-4.0 #region-us
# Dataset Card for NTCIR-13 MedWeb ## Dataset Description - Homepage: URL - Pubmed: False - Public: False - Tasks: TRANSL,TXTCLASS The NTCIR-13 MedWeb (Medical Natural Language Processing for Web Document) task requires performing a multi-label classification in which labels for eight diseases/symptoms must be assigned to each tweet. Given pseudo-tweets, the outputs are Positive:p or Negative:n labels for the eight diseases/symptoms. The achievements of this task can almost be directly applied to a fundamental engine for actual applications. This task provides pseudo-Twitter messages in a cross-language and multi-label corpus, covering three languages (Japanese, English, and Chinese), and annotated with the eight labels influenza, diarrhea/stomachache, hay fever, cough/sore throat, headache, fever, runny nose, and cold. For more information, see: URL As this dataset also provides a parallel corpus of pseudo-tweets in English, Japanese, and Chinese, it can also be used to train translation models between these three languages.
[ "# Dataset Card for NTCIR-13 MedWeb", "## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: False\n- Tasks: TRANSL,TXTCLASS\n\n\nNTCIR-13 MedWeb (Medical Natural Language Processing for Web Document) task requires\nto perform a multi-label classification that labels for eight diseases/symptoms must\nbe assigned to each tweet. Given pseudo-tweets, the output are Positive:p or Negative:n\nlabels for eight diseases/symptoms. The achievements of this task can almost be\ndirectly applied to a fundamental engine for actual applications.\n\nThis task provides pseudo-Twitter messages in a cross-language and multi-label corpus,\ncovering three languages (Japanese, English, and Chinese), and annotated with eight\nlabels such as influenza, diarrhea/stomachache, hay fever, cough/sore throat, headache,\nfever, runny nose, and cold.\n\nFor more information, see:\nURL\n\nAs this dataset also provides a parallel corpus of pseudo-tweets for english,\njapanese and chinese it can also be used to train translation models between\nthese three languages." ]
[ "TAGS\n#multilinguality-multilingual #language-English #language-Chinese #language-Japanese #license-cc-by-4.0 #region-us \n", "# Dataset Card for NTCIR-13 MedWeb", "## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: False\n- Tasks: TRANSL,TXTCLASS\n\n\nNTCIR-13 MedWeb (Medical Natural Language Processing for Web Document) task requires\nto perform a multi-label classification that labels for eight diseases/symptoms must\nbe assigned to each tweet. Given pseudo-tweets, the output are Positive:p or Negative:n\nlabels for eight diseases/symptoms. The achievements of this task can almost be\ndirectly applied to a fundamental engine for actual applications.\n\nThis task provides pseudo-Twitter messages in a cross-language and multi-label corpus,\ncovering three languages (Japanese, English, and Chinese), and annotated with eight\nlabels such as influenza, diarrhea/stomachache, hay fever, cough/sore throat, headache,\nfever, runny nose, and cold.\n\nFor more information, see:\nURL\n\nAs this dataset also provides a parallel corpus of pseudo-tweets for english,\njapanese and chinese it can also be used to train translation models between\nthese three languages." ]
3fce95ce6eead304ae1f7bc789a37388401aafe1
# Dataset Card for "fol-00" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
unigram/fol-00
[ "region:us" ]
2024-01-29T10:16:39+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": "string"}, {"name": "premise_tptp", "dtype": "string"}, {"name": "hypothesis_tptp", "dtype": "string"}, {"name": "deberta_pred", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 122255996.91323747, "num_examples": 125335}, {"name": "validation", "num_bytes": 15282121.543381268, "num_examples": 15667}, {"name": "test", "num_bytes": 15282121.543381268, "num_examples": 15667}], "download_size": 51428426, "dataset_size": 152820240.0}}
2024-02-15T16:06:41+00:00
[]
[]
TAGS #region-us
# Dataset Card for "fol-00" More Information needed
[ "# Dataset Card for \"fol-00\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"fol-00\"\n\nMore Information needed" ]
798adc8238fdb929ba527d96fbcd2beabe4d990d
# ChartBench: A Benchmark for Complex Visual Reasoning in Charts <a href='https://arxiv.org/abs/2312.15915'><img src='https://img.shields.io/badge/arXiv-2312.15915-b31b1b.svg'></a> <a href='https://github.com/'><img src='https://img.shields.io/badge/Project-Page-Green'></a> <a href='https://github.com/buaacyw/GaussianEditor/blob/master/LICENSE.txt'><img src='https://img.shields.io/badge/License-MIT-blue'></a> [![Dataset](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Dataset-blue)](https://huggingface.co/datasets/SincereX/ChartBench-Demo) ## Introduction We propose the challenging ChartBench to evaluate the chart recognition of MLLMs. ![ChartBench Pipeline.](./asset/pipeline.png) We improve the *Acc+* metric to avoid the randomly guessing situations. ![improved Acc+ metric.](./asset/Acc+_vis.png) We collect a larger set of unlabeled charts to emphasize the MLLM's ability to interpret visual information without the aid of annotated data points. ![Chart distributions and ChartCoT.](./asset/contribution.png) ## Todo - [ ] Open source all data of ChartBench. - [x] Open source the evaluate scripts. - [x] Open source the inference scripts. - [x] Open source the demo data (10%). ## Setup Please follow the official repository instructions below to set up the local environment. - <a href='https://huggingface.co/spaces/Salesforce/BLIP2'><img src='https://img.shields.io/badge/BLIP2-https://huggingface.co/spaces/Salesforce/BLIP2-blue'></a> - <a href='https://huggingface.co/docs/transformers/model_doc/instructblip'><img src='https://img.shields.io/badge/InstructBLIP-https://huggingface.co/docs/transformers/model_doc/instructblip-blue'></a> - <a href='https://github.com/THUDM/CogVLM'><img src='https://img.shields.io/badge/CogVLM-https://github.com/THUDM/CogVLM-blue'></a> - <a href='https://github.com/QwenLM/Qwen-VL'><img src='https://img.shields.io/badge/Qwen_VL_Chat-https://github.com/QwenLM/QwenVL-blue'></a> - <a href='https://llava-vl.github.io/'><img src='https://img.shields.io/badge/LLaVA_v1.5-https://llava_vl.github.io/-blue'></a> - <a href='https://github.com/Vision-CAIR/MiniGPT-4'><img src='https://img.shields.io/badge/MiniGPT_v2-https://github.com/VisionCAIR/MiniGPT4-blue'></a> - <a href='https://github.com/THUDM/VisualGLM-6B'><img src='https://img.shields.io/badge/VisualGLM-https://github.com/THUDM/VisualGLM6B-blue'></a> - <a href='https://github.com/X-PLUG/mPLUG-Owl'><img src='https://img.shields.io/badge/mPLUG_Owl-https://github.com/XPLUG/mPLUGOwl-blue'></a> - <a href='https://github.com/InternLM/InternLM-XComposer'><img src='https://img.shields.io/badge/InternLM_XComposer-https://github.com/InternLM/InternLMXComposer-blue'></a> - <a href='https://github.com/shikras/shikra'><img src='https://img.shields.io/badge/Shikra-https://github.com/shikras/shikra-blue'></a> - <a href='https://github.com/Alpha-VLLM/LLaMA2-Accessory/tree/main/SPHINX'><img src='https://img.shields.io/badge/SPHINX-https://github.com/AlphaVLLM/LLaMA2Accessory/tree/main/SPHINX-blue'></a> - <a href='https://huggingface.co/listen2you002/ChartLlama-13b'><img src='https://img.shields.io/badge/ChartLLaMA-https://huggingface.co/listen2you002/ChartLlama13b-blue'></a> ## Inference 1. Complete the basic environment setup. 2. Set `task_name` in `./Repos/myprompt.py`, such as `test` or `BLIP2_Style`. 3. Select or set the desired system prompt in `./Repos/myprompt.py`. 4. Modify the default path of `CKPT_PATH` in `./Repos/{MODEL_NAME}/run.py`. 5. 
Run `run.py` following the command format in `./Scripts/inference.sh`. 6. The results are saved by default in `./Eval/{task_name}/{MODEL_NAME}`. 7. Set the parameters in `./Scripts/stat_acc_plus.py` and the statistical results are saved in `./Eval/{task_name}/Eval_Result`. ## Ranking ![ChartBench Pipeline.](./asset/Acc+Rank.png) ## Citation ```bib @article{ChartBench, title={ChartBench: A Benchmark for Complex Visual Reasoning in Charts}, author={Zhengzhuo Xu and Sinan Du and Yiyan Qi and Chengjin Xu and Chun Yuan and Jian Guo}, journal={ArXiv}, year={2023}, volume={abs/2312.15915}, url={https://api.semanticscholar.org/CorpusID:266550948} } ```
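For orientation, a minimal Python driver for steps 5-7 of the inference procedure above is sketched here; the script paths come from this README, but their command-line arguments are not documented in it, so none are passed, and the model directory name is a hypothetical placeholder.

```python
# A hypothetical driver for steps 5-7 above. Script paths are taken from the README; any CLI
# arguments they expect (see ./Scripts/inference.sh) are omitted, so adapt before running.
import subprocess

model_name = "LLaVA"  # placeholder: any directory under ./Repos/ whose run.py has CKPT_PATH set

subprocess.run(["python", f"./Repos/{model_name}/run.py"], check=True)  # step 5: run inference
# step 6: the script writes answers to ./Eval/<task_name>/<MODEL_NAME> by default
subprocess.run(["python", "./Scripts/stat_acc_plus.py"], check=True)    # step 7: aggregate Acc+ stats
```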
SincereX/ChartBench-Demo
[ "arxiv:2312.15915", "region:us" ]
2024-01-29T10:20:41+00:00
{}
2024-01-29T11:10:05+00:00
[ "2312.15915" ]
[]
TAGS #arxiv-2312.15915 #region-us
# ChartBench: A Benchmark for Complex Visual Reasoning in Charts <a href='URL src='URL <a href='URL src='URL <a href='URL src='URL ![Dataset](URL ## Introduction We propose the challenging ChartBench to evaluate the chart recognition of MLLMs. !ChartBench Pipeline. We improve the *Acc+* metric to avoid the randomly guessing situations. !improved Acc+ metric. We collect a larger set of unlabeled charts to emphasize the MLLM's ability to interpret visual information without the aid of annotated data points. !Chart distributions and ChartCoT. ## Todo - [ ] Open source all data of ChartBench. - [x] Open source the evaluate scripts. - [x] Open source the inference scripts. - [x] Open source the demo data (10%). ## Setup Please follow the official repository instructions below to set up the local environment. - <a href='URL src='URL/URL - <a href='URL src='URL/URL - <a href='URL src='URL/URL - <a href='URL src='URL/URL - <a href='URL src='URL/llava_vl.URL - <a href='URL src='URL/URL - <a href='URL src='URL/URL - <a href='URL src='URL/URL - <a href='URL src='URL/URL - <a href='URL src='URL/URL - <a href='URL src='URL/URL - <a href='URL src='URL/URL ## Inference 1. Complete the basic environment setup. 2. Set 'task_name' in './Repos/URL', such as 'test' or 'BLIP2_Style'. 3. Select or set the desired system prompt in './Repos/URL'. 4. Modify the default path of 'CKPT_PATH' in './Repos/{MODEL_NAME}/URL'. 5. Run 'URL' following the command format in './Scripts/URL'. 6. The results are saved by default in './Eval/{task_name}/{MODEL_NAME}'. 7. Set the parameters in './Scripts/stat_acc_plus.py' and the statistical results are saved in './Eval/{task_name}/Eval_Result'. ## Ranking !ChartBench Pipeline.
[ "# ChartBench: A Benchmark for Complex Visual Reasoning in Charts\n\n<a href='URL src='URL <a href='URL src='URL <a href='URL src='URL ![Dataset](URL", "## Introduction\n\nWe propose the challenging ChartBench to evaluate the chart recognition of MLLMs. \n!ChartBench Pipeline.\n\nWe improve the *Acc+* metric to avoid the randomly guessing situations.\n!improved Acc+ metric.\n\nWe collect a larger set of unlabeled charts to emphasize the MLLM's ability to interpret visual information without the aid of annotated data points.\n!Chart distributions and ChartCoT.", "## Todo\n- [ ] Open source all data of ChartBench.\n- [x] Open source the evaluate scripts.\n- [x] Open source the inference scripts.\n- [x] Open source the demo data (10%).", "## Setup\nPlease follow the official repository instructions below to set up the local environment.\n\n- <a href='URL src='URL/URL\n- <a href='URL src='URL/URL\n- <a href='URL src='URL/URL\n- <a href='URL src='URL/URL\n- <a href='URL src='URL/llava_vl.URL\n- <a href='URL src='URL/URL\n- <a href='URL src='URL/URL\n- <a href='URL src='URL/URL\n- <a href='URL src='URL/URL\n- <a href='URL src='URL/URL\n- <a href='URL src='URL/URL\n- <a href='URL src='URL/URL", "## Inference\n1. Complete the basic environment setup.\n2. Set 'task_name' in './Repos/URL', such as 'test' or 'BLIP2_Style'.\n3. Select or set the desired system prompt in './Repos/URL'.\n4. Modify the default path of 'CKPT_PATH' in './Repos/{MODEL_NAME}/URL'.\n5. Run 'URL' following the command format in './Scripts/URL'.\n6. The results are saved by default in './Eval/{task_name}/{MODEL_NAME}'.\n7. Set the parameters in './Scripts/stat_acc_plus.py' and the statistical results are saved in './Eval/{task_name}/Eval_Result'.", "## Ranking\n\n\n!ChartBench Pipeline." ]
[ "TAGS\n#arxiv-2312.15915 #region-us \n", "# ChartBench: A Benchmark for Complex Visual Reasoning in Charts\n\n<a href='URL src='URL <a href='URL src='URL <a href='URL src='URL ![Dataset](URL", "## Introduction\n\nWe propose the challenging ChartBench to evaluate the chart recognition of MLLMs. \n!ChartBench Pipeline.\n\nWe improve the *Acc+* metric to avoid the randomly guessing situations.\n!improved Acc+ metric.\n\nWe collect a larger set of unlabeled charts to emphasize the MLLM's ability to interpret visual information without the aid of annotated data points.\n!Chart distributions and ChartCoT.", "## Todo\n- [ ] Open source all data of ChartBench.\n- [x] Open source the evaluate scripts.\n- [x] Open source the inference scripts.\n- [x] Open source the demo data (10%).", "## Setup\nPlease follow the official repository instructions below to set up the local environment.\n\n- <a href='URL src='URL/URL\n- <a href='URL src='URL/URL\n- <a href='URL src='URL/URL\n- <a href='URL src='URL/URL\n- <a href='URL src='URL/llava_vl.URL\n- <a href='URL src='URL/URL\n- <a href='URL src='URL/URL\n- <a href='URL src='URL/URL\n- <a href='URL src='URL/URL\n- <a href='URL src='URL/URL\n- <a href='URL src='URL/URL\n- <a href='URL src='URL/URL", "## Inference\n1. Complete the basic environment setup.\n2. Set 'task_name' in './Repos/URL', such as 'test' or 'BLIP2_Style'.\n3. Select or set the desired system prompt in './Repos/URL'.\n4. Modify the default path of 'CKPT_PATH' in './Repos/{MODEL_NAME}/URL'.\n5. Run 'URL' following the command format in './Scripts/URL'.\n6. The results are saved by default in './Eval/{task_name}/{MODEL_NAME}'.\n7. Set the parameters in './Scripts/stat_acc_plus.py' and the statistical results are saved in './Eval/{task_name}/Eval_Result'.", "## Ranking\n\n\n!ChartBench Pipeline." ]
9cf58e14071a9a885d0f868d00dac34163b6f968
# Dataset Card for "chonk-data" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mixamrepijey/chonk-data
[ "region:us" ]
2024-01-29T10:42:45+00:00
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 8522685, "num_examples": 718}, {"name": "train", "num_bytes": 76421581, "num_examples": 6434}], "download_size": 36240875, "dataset_size": 84944266}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]}
2024-01-30T14:02:18+00:00
[]
[]
TAGS #region-us
# Dataset Card for "chonk-data" More Information needed
[ "# Dataset Card for \"chonk-data\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"chonk-data\"\n\nMore Information needed" ]
3a0a0c7227ec34e2cf6bbcaca5a89f1959584de3
# Dataset Card for "Bollywood_songs" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
eswardivi/Bollywood_songs
[ "language:en", "region:us" ]
2024-01-29T10:52:50+00:00
{"language": ["en"], "dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "instruction", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5387340, "num_examples": 999}], "download_size": 2942424, "dataset_size": 5387340}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-02-04T13:01:39+00:00
[]
[ "en" ]
TAGS #language-English #region-us
# Dataset Card for "Bollywood_songs" More Information needed
[ "# Dataset Card for \"Bollywood_songs\"\n\nMore Information needed" ]
[ "TAGS\n#language-English #region-us \n", "# Dataset Card for \"Bollywood_songs\"\n\nMore Information needed" ]
83982424d68aa2e25e8d9beb23ffac63f4e86537
# Dataset Card for Evaluation run of SC99/Mistral-7B-summ-lora-tuned-8h <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [SC99/Mistral-7B-summ-lora-tuned-8h](https://huggingface.co/SC99/Mistral-7B-summ-lora-tuned-8h) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configuration, each one coresponding to one of the evaluated task. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results. An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_SC99__Mistral-7B-summ-lora-tuned-8h", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2024-01-29T11:29:57.357503](https://huggingface.co/datasets/open-llm-leaderboard/details_SC99__Mistral-7B-summ-lora-tuned-8h/blob/main/results_2024-01-29T11-29-57.357503.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.6041074187583433, "acc_stderr": 0.03320878332044893, "acc_norm": 0.6085860377953661, "acc_norm_stderr": 0.033883194330331504, "mc1": 0.5471236230110159, "mc1_stderr": 0.01742558984831402, "mc2": 0.6979827986405281, "mc2_stderr": 0.015101990973729242 }, "harness|arc:challenge|25": { "acc": 0.5870307167235495, "acc_stderr": 0.014388344935398326, "acc_norm": 0.6305460750853242, "acc_norm_stderr": 0.014104578366491887 }, "harness|hellaswag|10": { "acc": 0.673770165305716, "acc_stderr": 0.004678743563766658, "acc_norm": 0.8517227643895638, "acc_norm_stderr": 0.003546483015569106 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.3, "acc_stderr": 0.046056618647183814, "acc_norm": 0.3, "acc_norm_stderr": 0.046056618647183814 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.6, "acc_stderr": 0.04232073695151589, "acc_norm": 0.6, "acc_norm_stderr": 0.04232073695151589 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.625, "acc_stderr": 0.039397364351956274, "acc_norm": 0.625, "acc_norm_stderr": 0.039397364351956274 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.6, "acc_stderr": 0.049236596391733084, "acc_norm": 0.6, "acc_norm_stderr": 0.049236596391733084 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.6679245283018868, "acc_stderr": 0.02898545565233439, "acc_norm": 0.6679245283018868, "acc_norm_stderr": 0.02898545565233439 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.6944444444444444, "acc_stderr": 0.03852084696008534, "acc_norm": 0.6944444444444444, "acc_norm_stderr": 0.03852084696008534 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.4, "acc_stderr": 0.04923659639173309, "acc_norm": 0.4, "acc_norm_stderr": 0.04923659639173309 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.51, "acc_stderr": 0.05024183937956911, "acc_norm": 0.51, "acc_norm_stderr": 0.05024183937956911 }, 
"harness|hendrycksTest-college_mathematics|5": { "acc": 0.38, "acc_stderr": 0.04878317312145632, "acc_norm": 0.38, "acc_norm_stderr": 0.04878317312145632 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.5953757225433526, "acc_stderr": 0.03742461193887248, "acc_norm": 0.5953757225433526, "acc_norm_stderr": 0.03742461193887248 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.4215686274509804, "acc_stderr": 0.04913595201274498, "acc_norm": 0.4215686274509804, "acc_norm_stderr": 0.04913595201274498 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.69, "acc_stderr": 0.04648231987117316, "acc_norm": 0.69, "acc_norm_stderr": 0.04648231987117316 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.5319148936170213, "acc_stderr": 0.03261936918467382, "acc_norm": 0.5319148936170213, "acc_norm_stderr": 0.03261936918467382 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.42105263157894735, "acc_stderr": 0.046446020912223177, "acc_norm": 0.42105263157894735, "acc_norm_stderr": 0.046446020912223177 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.6275862068965518, "acc_stderr": 0.04028731532947559, "acc_norm": 0.6275862068965518, "acc_norm_stderr": 0.04028731532947559 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.3835978835978836, "acc_stderr": 0.0250437573185202, "acc_norm": 0.3835978835978836, "acc_norm_stderr": 0.0250437573185202 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.42063492063492064, "acc_stderr": 0.04415438226743744, "acc_norm": 0.42063492063492064, "acc_norm_stderr": 0.04415438226743744 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.35, "acc_stderr": 0.0479372485441102, "acc_norm": 0.35, "acc_norm_stderr": 0.0479372485441102 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.6, "acc_stderr": 0.027869320571664635, "acc_norm": 0.6, "acc_norm_stderr": 0.027869320571664635 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.5073891625615764, "acc_stderr": 0.035176035403610105, "acc_norm": 0.5073891625615764, "acc_norm_stderr": 0.035176035403610105 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.62, "acc_stderr": 0.04878317312145632, "acc_norm": 0.62, "acc_norm_stderr": 0.04878317312145632 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.7393939393939394, "acc_stderr": 0.034277431758165236, "acc_norm": 0.7393939393939394, "acc_norm_stderr": 0.034277431758165236 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.7626262626262627, "acc_stderr": 0.030313710538198896, "acc_norm": 0.7626262626262627, "acc_norm_stderr": 0.030313710538198896 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.844559585492228, "acc_stderr": 0.026148483469153303, "acc_norm": 0.844559585492228, "acc_norm_stderr": 0.026148483469153303 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.5641025641025641, "acc_stderr": 0.025141801511177495, "acc_norm": 0.5641025641025641, "acc_norm_stderr": 0.025141801511177495 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.3, "acc_stderr": 0.02794045713622839, "acc_norm": 0.3, "acc_norm_stderr": 0.02794045713622839 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.6638655462184874, "acc_stderr": 0.030684737115135363, "acc_norm": 0.6638655462184874, "acc_norm_stderr": 0.030684737115135363 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.3509933774834437, "acc_stderr": 0.03896981964257375, "acc_norm": 0.3509933774834437, "acc_norm_stderr": 
0.03896981964257375 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.7908256880733945, "acc_stderr": 0.017437937173343233, "acc_norm": 0.7908256880733945, "acc_norm_stderr": 0.017437937173343233 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.4398148148148148, "acc_stderr": 0.03385177976044812, "acc_norm": 0.4398148148148148, "acc_norm_stderr": 0.03385177976044812 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.7598039215686274, "acc_stderr": 0.02998373305591361, "acc_norm": 0.7598039215686274, "acc_norm_stderr": 0.02998373305591361 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.7510548523206751, "acc_stderr": 0.028146970599422644, "acc_norm": 0.7510548523206751, "acc_norm_stderr": 0.028146970599422644 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.6188340807174888, "acc_stderr": 0.03259625118416827, "acc_norm": 0.6188340807174888, "acc_norm_stderr": 0.03259625118416827 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.7175572519083969, "acc_stderr": 0.03948406125768361, "acc_norm": 0.7175572519083969, "acc_norm_stderr": 0.03948406125768361 }, "harness|hendrycksTest-international_law|5": { "acc": 0.8016528925619835, "acc_stderr": 0.03640118271990947, "acc_norm": 0.8016528925619835, "acc_norm_stderr": 0.03640118271990947 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.7222222222222222, "acc_stderr": 0.04330043749650743, "acc_norm": 0.7222222222222222, "acc_norm_stderr": 0.04330043749650743 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7607361963190185, "acc_stderr": 0.033519538795212696, "acc_norm": 0.7607361963190185, "acc_norm_stderr": 0.033519538795212696 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.45535714285714285, "acc_stderr": 0.04726835553719099, "acc_norm": 0.45535714285714285, "acc_norm_stderr": 0.04726835553719099 }, "harness|hendrycksTest-management|5": { "acc": 0.7572815533980582, "acc_stderr": 0.04245022486384495, "acc_norm": 0.7572815533980582, "acc_norm_stderr": 0.04245022486384495 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8547008547008547, "acc_stderr": 0.023086635086841407, "acc_norm": 0.8547008547008547, "acc_norm_stderr": 0.023086635086841407 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.66, "acc_stderr": 0.04760952285695237, "acc_norm": 0.66, "acc_norm_stderr": 0.04760952285695237 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.7803320561941252, "acc_stderr": 0.01480538447837115, "acc_norm": 0.7803320561941252, "acc_norm_stderr": 0.01480538447837115 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.684971098265896, "acc_stderr": 0.025009313790069727, "acc_norm": 0.684971098265896, "acc_norm_stderr": 0.025009313790069727 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.2837988826815642, "acc_stderr": 0.015078358970751753, "acc_norm": 0.2837988826815642, "acc_norm_stderr": 0.015078358970751753 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.6699346405228758, "acc_stderr": 0.026925654653615703, "acc_norm": 0.6699346405228758, "acc_norm_stderr": 0.026925654653615703 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.6913183279742765, "acc_stderr": 0.026236965881153262, "acc_norm": 0.6913183279742765, "acc_norm_stderr": 0.026236965881153262 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.691358024691358, "acc_stderr": 0.025702640260603742, "acc_norm": 0.691358024691358, "acc_norm_stderr": 0.025702640260603742 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.46099290780141844, "acc_stderr": 
0.02973659252642444, "acc_norm": 0.46099290780141844, "acc_norm_stderr": 0.02973659252642444 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.4315514993481095, "acc_stderr": 0.012650007999463872, "acc_norm": 0.4315514993481095, "acc_norm_stderr": 0.012650007999463872 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.6213235294117647, "acc_stderr": 0.02946513363977613, "acc_norm": 0.6213235294117647, "acc_norm_stderr": 0.02946513363977613 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6274509803921569, "acc_stderr": 0.019559646809215934, "acc_norm": 0.6274509803921569, "acc_norm_stderr": 0.019559646809215934 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.7090909090909091, "acc_stderr": 0.04350271442923243, "acc_norm": 0.7090909090909091, "acc_norm_stderr": 0.04350271442923243 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.7142857142857143, "acc_stderr": 0.0289205832206756, "acc_norm": 0.7142857142857143, "acc_norm_stderr": 0.0289205832206756 }, "harness|hendrycksTest-sociology|5": { "acc": 0.6716417910447762, "acc_stderr": 0.033206858897443244, "acc_norm": 0.6716417910447762, "acc_norm_stderr": 0.033206858897443244 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.8, "acc_stderr": 0.04020151261036846, "acc_norm": 0.8, "acc_norm_stderr": 0.04020151261036846 }, "harness|hendrycksTest-virology|5": { "acc": 0.4939759036144578, "acc_stderr": 0.03892212195333047, "acc_norm": 0.4939759036144578, "acc_norm_stderr": 0.03892212195333047 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.8304093567251462, "acc_stderr": 0.02878210810540171, "acc_norm": 0.8304093567251462, "acc_norm_stderr": 0.02878210810540171 }, "harness|truthfulqa:mc|0": { "mc1": 0.5471236230110159, "mc1_stderr": 0.01742558984831402, "mc2": 0.6979827986405281, "mc2_stderr": 0.015101990973729242 }, "harness|winogrande|5": { "acc": 0.7734806629834254, "acc_stderr": 0.011764149054698338 }, "harness|gsm8k|5": { "acc": 0.39196360879454134, "acc_stderr": 0.013447140886023818 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). 
--> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
open-llm-leaderboard/details_SC99__Mistral-7B-summ-lora-tuned-8h
[ "region:us" ]
2024-01-29T11:32:14+00:00
{"pretty_name": "Evaluation run of SC99/Mistral-7B-summ-lora-tuned-8h", "dataset_summary": "Dataset automatically created during the evaluation run of model [SC99/Mistral-7B-summ-lora-tuned-8h](https://huggingface.co/SC99/Mistral-7B-summ-lora-tuned-8h) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_SC99__Mistral-7B-summ-lora-tuned-8h\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-29T11:29:57.357503](https://huggingface.co/datasets/open-llm-leaderboard/details_SC99__Mistral-7B-summ-lora-tuned-8h/blob/main/results_2024-01-29T11-29-57.357503.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6041074187583433,\n \"acc_stderr\": 0.03320878332044893,\n \"acc_norm\": 0.6085860377953661,\n \"acc_norm_stderr\": 0.033883194330331504,\n \"mc1\": 0.5471236230110159,\n \"mc1_stderr\": 0.01742558984831402,\n \"mc2\": 0.6979827986405281,\n \"mc2_stderr\": 0.015101990973729242\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5870307167235495,\n \"acc_stderr\": 0.014388344935398326,\n \"acc_norm\": 0.6305460750853242,\n \"acc_norm_stderr\": 0.014104578366491887\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.673770165305716,\n \"acc_stderr\": 0.004678743563766658,\n \"acc_norm\": 0.8517227643895638,\n \"acc_norm_stderr\": 0.003546483015569106\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6,\n \"acc_stderr\": 0.04232073695151589,\n \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.04232073695151589\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.625,\n \"acc_stderr\": 0.039397364351956274,\n \"acc_norm\": 0.625,\n \"acc_norm_stderr\": 0.039397364351956274\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.6,\n \"acc_stderr\": 0.049236596391733084,\n \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.049236596391733084\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6679245283018868,\n \"acc_stderr\": 0.02898545565233439,\n \"acc_norm\": 0.6679245283018868,\n \"acc_norm_stderr\": 0.02898545565233439\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6944444444444444,\n \"acc_stderr\": 0.03852084696008534,\n \"acc_norm\": 0.6944444444444444,\n \"acc_norm_stderr\": 0.03852084696008534\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.4,\n \"acc_stderr\": 0.04923659639173309,\n 
\"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.04923659639173309\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.51,\n \"acc_stderr\": 0.05024183937956911,\n \"acc_norm\": 0.51,\n \"acc_norm_stderr\": 0.05024183937956911\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.38,\n \"acc_stderr\": 0.04878317312145632,\n \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.04878317312145632\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5953757225433526,\n \"acc_stderr\": 0.03742461193887248,\n \"acc_norm\": 0.5953757225433526,\n \"acc_norm_stderr\": 0.03742461193887248\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.4215686274509804,\n \"acc_stderr\": 0.04913595201274498,\n \"acc_norm\": 0.4215686274509804,\n \"acc_norm_stderr\": 0.04913595201274498\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5319148936170213,\n \"acc_stderr\": 0.03261936918467382,\n \"acc_norm\": 0.5319148936170213,\n \"acc_norm_stderr\": 0.03261936918467382\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.42105263157894735,\n \"acc_stderr\": 0.046446020912223177,\n \"acc_norm\": 0.42105263157894735,\n \"acc_norm_stderr\": 0.046446020912223177\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.6275862068965518,\n \"acc_stderr\": 0.04028731532947559,\n \"acc_norm\": 0.6275862068965518,\n \"acc_norm_stderr\": 0.04028731532947559\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.3835978835978836,\n \"acc_stderr\": 0.0250437573185202,\n \"acc_norm\": 0.3835978835978836,\n \"acc_norm_stderr\": 0.0250437573185202\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.42063492063492064,\n \"acc_stderr\": 0.04415438226743744,\n \"acc_norm\": 0.42063492063492064,\n \"acc_norm_stderr\": 0.04415438226743744\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.0479372485441102,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.6,\n \"acc_stderr\": 0.027869320571664635,\n \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.027869320571664635\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5073891625615764,\n \"acc_stderr\": 0.035176035403610105,\n \"acc_norm\": 0.5073891625615764,\n \"acc_norm_stderr\": 0.035176035403610105\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.62,\n \"acc_stderr\": 0.04878317312145632,\n \"acc_norm\": 0.62,\n \"acc_norm_stderr\": 0.04878317312145632\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7393939393939394,\n \"acc_stderr\": 0.034277431758165236,\n \"acc_norm\": 0.7393939393939394,\n \"acc_norm_stderr\": 0.034277431758165236\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7626262626262627,\n \"acc_stderr\": 0.030313710538198896,\n \"acc_norm\": 0.7626262626262627,\n \"acc_norm_stderr\": 0.030313710538198896\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.844559585492228,\n \"acc_stderr\": 0.026148483469153303,\n \"acc_norm\": 0.844559585492228,\n \"acc_norm_stderr\": 0.026148483469153303\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.5641025641025641,\n \"acc_stderr\": 
0.025141801511177495,\n \"acc_norm\": 0.5641025641025641,\n \"acc_norm_stderr\": 0.025141801511177495\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.02794045713622839,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.02794045713622839\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6638655462184874,\n \"acc_stderr\": 0.030684737115135363,\n \"acc_norm\": 0.6638655462184874,\n \"acc_norm_stderr\": 0.030684737115135363\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3509933774834437,\n \"acc_stderr\": 0.03896981964257375,\n \"acc_norm\": 0.3509933774834437,\n \"acc_norm_stderr\": 0.03896981964257375\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.7908256880733945,\n \"acc_stderr\": 0.017437937173343233,\n \"acc_norm\": 0.7908256880733945,\n \"acc_norm_stderr\": 0.017437937173343233\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.4398148148148148,\n \"acc_stderr\": 0.03385177976044812,\n \"acc_norm\": 0.4398148148148148,\n \"acc_norm_stderr\": 0.03385177976044812\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7598039215686274,\n \"acc_stderr\": 0.02998373305591361,\n \"acc_norm\": 0.7598039215686274,\n \"acc_norm_stderr\": 0.02998373305591361\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7510548523206751,\n \"acc_stderr\": 0.028146970599422644,\n \"acc_norm\": 0.7510548523206751,\n \"acc_norm_stderr\": 0.028146970599422644\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6188340807174888,\n \"acc_stderr\": 0.03259625118416827,\n \"acc_norm\": 0.6188340807174888,\n \"acc_norm_stderr\": 0.03259625118416827\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7175572519083969,\n \"acc_stderr\": 0.03948406125768361,\n \"acc_norm\": 0.7175572519083969,\n \"acc_norm_stderr\": 0.03948406125768361\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8016528925619835,\n \"acc_stderr\": 0.03640118271990947,\n \"acc_norm\": 0.8016528925619835,\n \"acc_norm_stderr\": 0.03640118271990947\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7222222222222222,\n \"acc_stderr\": 0.04330043749650743,\n \"acc_norm\": 0.7222222222222222,\n \"acc_norm_stderr\": 0.04330043749650743\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7607361963190185,\n \"acc_stderr\": 0.033519538795212696,\n \"acc_norm\": 0.7607361963190185,\n \"acc_norm_stderr\": 0.033519538795212696\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.45535714285714285,\n \"acc_stderr\": 0.04726835553719099,\n \"acc_norm\": 0.45535714285714285,\n \"acc_norm_stderr\": 0.04726835553719099\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7572815533980582,\n \"acc_stderr\": 0.04245022486384495,\n \"acc_norm\": 0.7572815533980582,\n \"acc_norm_stderr\": 0.04245022486384495\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8547008547008547,\n \"acc_stderr\": 0.023086635086841407,\n \"acc_norm\": 0.8547008547008547,\n \"acc_norm_stderr\": 0.023086635086841407\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.66,\n \"acc_stderr\": 0.04760952285695237,\n \"acc_norm\": 0.66,\n \"acc_norm_stderr\": 0.04760952285695237\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7803320561941252,\n \"acc_stderr\": 0.01480538447837115,\n \"acc_norm\": 0.7803320561941252,\n \"acc_norm_stderr\": 0.01480538447837115\n },\n 
\"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.684971098265896,\n \"acc_stderr\": 0.025009313790069727,\n \"acc_norm\": 0.684971098265896,\n \"acc_norm_stderr\": 0.025009313790069727\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2837988826815642,\n \"acc_stderr\": 0.015078358970751753,\n \"acc_norm\": 0.2837988826815642,\n \"acc_norm_stderr\": 0.015078358970751753\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.6699346405228758,\n \"acc_stderr\": 0.026925654653615703,\n \"acc_norm\": 0.6699346405228758,\n \"acc_norm_stderr\": 0.026925654653615703\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6913183279742765,\n \"acc_stderr\": 0.026236965881153262,\n \"acc_norm\": 0.6913183279742765,\n \"acc_norm_stderr\": 0.026236965881153262\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.691358024691358,\n \"acc_stderr\": 0.025702640260603742,\n \"acc_norm\": 0.691358024691358,\n \"acc_norm_stderr\": 0.025702640260603742\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.46099290780141844,\n \"acc_stderr\": 0.02973659252642444,\n \"acc_norm\": 0.46099290780141844,\n \"acc_norm_stderr\": 0.02973659252642444\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4315514993481095,\n \"acc_stderr\": 0.012650007999463872,\n \"acc_norm\": 0.4315514993481095,\n \"acc_norm_stderr\": 0.012650007999463872\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6213235294117647,\n \"acc_stderr\": 0.02946513363977613,\n \"acc_norm\": 0.6213235294117647,\n \"acc_norm_stderr\": 0.02946513363977613\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6274509803921569,\n \"acc_stderr\": 0.019559646809215934,\n \"acc_norm\": 0.6274509803921569,\n \"acc_norm_stderr\": 0.019559646809215934\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7090909090909091,\n \"acc_stderr\": 0.04350271442923243,\n \"acc_norm\": 0.7090909090909091,\n \"acc_norm_stderr\": 0.04350271442923243\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7142857142857143,\n \"acc_stderr\": 0.0289205832206756,\n \"acc_norm\": 0.7142857142857143,\n \"acc_norm_stderr\": 0.0289205832206756\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.6716417910447762,\n \"acc_stderr\": 0.033206858897443244,\n \"acc_norm\": 0.6716417910447762,\n \"acc_norm_stderr\": 0.033206858897443244\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.8,\n \"acc_stderr\": 0.04020151261036846,\n \"acc_norm\": 0.8,\n \"acc_norm_stderr\": 0.04020151261036846\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.4939759036144578,\n \"acc_stderr\": 0.03892212195333047,\n \"acc_norm\": 0.4939759036144578,\n \"acc_norm_stderr\": 0.03892212195333047\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8304093567251462,\n \"acc_stderr\": 0.02878210810540171,\n \"acc_norm\": 0.8304093567251462,\n \"acc_norm_stderr\": 0.02878210810540171\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5471236230110159,\n \"mc1_stderr\": 0.01742558984831402,\n \"mc2\": 0.6979827986405281,\n \"mc2_stderr\": 0.015101990973729242\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7734806629834254,\n \"acc_stderr\": 0.011764149054698338\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.39196360879454134,\n \"acc_stderr\": 0.013447140886023818\n }\n}\n```", "repo_url": "https://huggingface.co/SC99/Mistral-7B-summ-lora-tuned-8h", "leaderboard_url": 
"https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|arc:challenge|25_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|gsm8k|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hellaswag|10_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-29T11-29-57.357503.parquet", 
"**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-29T11-29-57.357503.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-29T11-29-57.357503.parquet", 
"**/details_harness|hendrycksTest-prehistory|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-29T11-29-57.357503.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-29T11-29-57.357503.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-29T11-29-57.357503.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["**/details_harness|winogrande|5_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-29T11-29-57.357503.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_01_29T11_29_57.357503", "path": ["results_2024-01-29T11-29-57.357503.parquet"]}, {"split": "latest", "path": 
["results_2024-01-29T11-29-57.357503.parquet"]}]}]}
2024-01-29T11:32:35+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of SC99/Mistral-7B-summ-lora-tuned-8h Dataset automatically created during the evaluation run of model SC99/Mistral-7B-summ-lora-tuned-8h on the Open LLM Leaderboard. The dataset is composed of 63 configuration, each one coresponding to one of the evaluated task. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results. An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2024-01-29T11:29:57.357503(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Evaluation run of SC99/Mistral-7B-summ-lora-tuned-8h\n\n\n\nDataset automatically created during the evaluation run of model SC99/Mistral-7B-summ-lora-tuned-8h on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-01-29T11:29:57.357503(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of SC99/Mistral-7B-summ-lora-tuned-8h\n\n\n\nDataset automatically created during the evaluation run of model SC99/Mistral-7B-summ-lora-tuned-8h on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-01-29T11:29:57.357503(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
0f82f241b2dd5fb7e2d7c1edcbf2f393702a5cd3
# Dataset Card for "ultrafeedback-capybara-mix-5k" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
argilla/ultrafeedback-capybara-mix-5k
[ "region:us" ]
2024-01-29T11:37:02+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "source", "dtype": "string"}, {"name": "conversation", "list": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}]}, {"name": "original_response", "dtype": "string"}, {"name": "generation_prompt", "sequence": "string"}, {"name": "raw_generation_responses", "sequence": "string"}, {"name": "new_generations", "sequence": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "chosen", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "rejected", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "rating_chosen", "dtype": "int64"}, {"name": "rating_rejected", "dtype": "int64"}, {"name": "chosen_model", "dtype": "string"}, {"name": "rejected_model", "dtype": "string"}, {"name": "turns", "dtype": "int64"}, {"name": "chosen-rating", "dtype": "float64"}, {"name": "chosen-model", "dtype": "string"}, {"name": "rejected-rating", "dtype": "float64"}, {"name": "rejected-model", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 40207332.027373314, "num_examples": 4500}, {"name": "test", "num_bytes": 4467481.336374813, "num_examples": 500}], "download_size": 60430325, "dataset_size": 44674813.363748126}}
2024-01-29T11:37:58+00:00
[]
[]
TAGS #region-us
# Dataset Card for "ultrafeedback-capybara-mix-5k" More Information needed
[ "# Dataset Card for \"ultrafeedback-capybara-mix-5k\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"ultrafeedback-capybara-mix-5k\"\n\nMore Information needed" ]
4585332f52ebfe21df719590a5cae8630ec79fef
# Pandora Instruct An instruction dataset for supervised fine-tuning of the Pandora Large Language Model (LLM). The dataset is based on the following existing datasets: - [`teknium/openhermes`](https://huggingface.co/datasets/teknium/openhermes) - [`ise-uiuc/magicoder-evol-instruct-110k`](https://huggingface.co/datasets/ise-uiuc/magicoder-evol-instruct-110k) - [`ise-uiuc/magicoder-oss-instruct-75k`](https://huggingface.co/datasets/ise-uiuc/magicoder-oss-instruct-75k) - [`glaiveai/glaive-function-calling-v2`](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2) ## Copyright and license Copyright (c) 2024, Danilo Peixoto Ferreira. All rights reserved. Project developed under a [BSD-3-Clause license](LICENSE.md).
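As a rough aid for anyone wanting to inspect the source material, the sketch below simply loads each listed source from the Hub and prints its size and columns. It is not the authors' actual mixing recipe, and the split name `train` is an assumption that may not hold for every source.

```python
from datasets import load_dataset

# Source datasets listed above; identifiers are taken verbatim from the card.
sources = [
    "teknium/openhermes",
    "ise-uiuc/magicoder-evol-instruct-110k",
    "ise-uiuc/magicoder-oss-instruct-75k",
    "glaiveai/glaive-function-calling-v2",
]

for repo_id in sources:
    ds = load_dataset(repo_id, split="train")  # assumed split name
    print(repo_id, len(ds), ds.column_names)
```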
danilopeixoto/pandora-instruct
[ "task_categories:text-generation", "size_categories:100K<n<1M", "license:bsd-3-clause", "fine-tuning", "function-calling", "instruct", "sft", "region:us" ]
2024-01-29T11:54:56+00:00
{"license": "bsd-3-clause", "size_categories": ["100K<n<1M"], "task_categories": ["text-generation"], "pretty_name": "Pandora Instruct", "tags": ["fine-tuning", "function-calling", "instruct", "sft"]}
2024-01-29T11:56:00+00:00
[]
[]
TAGS #task_categories-text-generation #size_categories-100K<n<1M #license-bsd-3-clause #fine-tuning #function-calling #instruct #sft #region-us
# Pandora Instruct An instruction dataset for Supervised fine-tuning of the Pandora Large Language Model (LLM). The dataset is based on the existing datasets: - ('teknium/openhermes')[URL - ('ise-uiuc/magicoder-evol-instruct-110k')[URL - ('ise-uiuc/magicoder-oss-instruct-75k')[URL - ('glaiveai/glaive-function-calling-v2')[URL ## Copyright and license Copyright (c) 2024, Danilo Peixoto Ferreira. All rights reserved. Project developed under a BSD-3-Clause license.
[ "# Pandora Instruct\n\nAn instruction dataset for Supervised fine-tuning of the Pandora Large Language Model (LLM).\n\nThe dataset is based on the existing datasets:\n\n- ('teknium/openhermes')[URL\n- ('ise-uiuc/magicoder-evol-instruct-110k')[URL\n- ('ise-uiuc/magicoder-oss-instruct-75k')[URL\n- ('glaiveai/glaive-function-calling-v2')[URL", "## Copyright and license\n\nCopyright (c) 2024, Danilo Peixoto Ferreira. All rights reserved.\n\nProject developed under a BSD-3-Clause license." ]
[ "TAGS\n#task_categories-text-generation #size_categories-100K<n<1M #license-bsd-3-clause #fine-tuning #function-calling #instruct #sft #region-us \n", "# Pandora Instruct\n\nAn instruction dataset for Supervised fine-tuning of the Pandora Large Language Model (LLM).\n\nThe dataset is based on the existing datasets:\n\n- ('teknium/openhermes')[URL\n- ('ise-uiuc/magicoder-evol-instruct-110k')[URL\n- ('ise-uiuc/magicoder-oss-instruct-75k')[URL\n- ('glaiveai/glaive-function-calling-v2')[URL", "## Copyright and license\n\nCopyright (c) 2024, Danilo Peixoto Ferreira. All rights reserved.\n\nProject developed under a BSD-3-Clause license." ]
ac9d30ac7351c5f9bb31970b7ae82182b8b4f5e8
# Dataset Card for "LogicNLI" ```bib @inproceedings{tian-etal-2021-diagnosing, title = "Diagnosing the First-Order Logical Reasoning Ability Through {L}ogic{NLI}", author = "Tian, Jidong and Li, Yitian and Chen, Wenqing and Xiao, Liqiang and He, Hao and Jin, Yaohui", editor = "Moens, Marie-Francine and Huang, Xuanjing and Specia, Lucia and Yih, Scott Wen-tau", booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.303", doi = "10.18653/v1/2021.emnlp-main.303", pages = "3738--3747", abstract = "Recently, language models (LMs) have achieved significant performance on many NLU tasks, which has spurred widespread interest for their possible applications in the scientific and social area. However, LMs have faced much criticism of whether they are truly capable of reasoning in NLU. In this work, we propose a diagnostic method for first-order logic (FOL) reasoning with a new proposed benchmark, LogicNLI. LogicNLI is an NLI-style dataset that effectively disentangles the target FOL reasoning from commonsense inference and can be used to diagnose LMs from four perspectives: accuracy, robustness, generalization, and interpretability. Experiments on BERT, RoBERTa, and XLNet, have uncovered the weaknesses of these LMs on FOL reasoning, which motivates future exploration to enhance the reasoning ability.", } ``` https://github.com/omnilabNLP/LogicNLI
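A short usage sketch, assuming the features and splits declared in this repository's configuration (string columns `premise`, `hypothesis`, `label` over `train`/`validation`/`test` splits); the comments describe the expected content and are not part of the upstream card.

```python
from datasets import load_dataset

# Load the default configuration; splits are train / validation / test.
logicnli = load_dataset("tasksource/LogicNLI")

example = logicnli["train"][0]
print(example["premise"])     # premise text (facts and rules)
print(example["hypothesis"])  # hypothesis to classify against the premise
print(example["label"])       # label stored as a string
```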
tasksource/LogicNLI
[ "region:us" ]
2024-01-29T12:10:24+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 19241020, "num_examples": 16000}, {"name": "validation", "num_bytes": 2359422, "num_examples": 2000}, {"name": "test", "num_bytes": 2368137, "num_examples": 2000}], "download_size": 713871, "dataset_size": 23968579}}
2024-01-29T12:26:11+00:00
[]
[]
TAGS #region-us
# Dataset Card for "LogicNLI" URL
[ "# Dataset Card for \"LogicNLI\"\n\n\n\nURL" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"LogicNLI\"\n\n\n\nURL" ]
fd7648756a1586956e13726d43627480738a64cd
About two years of IoT-based smart parking lot usage data collected on ThingSpeak, containing timestamped slot-availability information.
suwesh/IoT-based-SmartParkingSystem-dataset
[ "license:ecl-2.0", "region:us" ]
2024-01-29T12:50:13+00:00
{"license": "ecl-2.0"}
2024-01-29T12:52:42+00:00
[]
[]
TAGS #license-ecl-2.0 #region-us
About two years of IoT-based smart parking lot usage data collected on ThingSpeak, containing timestamped slot-availability information.
[]
[ "TAGS\n#license-ecl-2.0 #region-us \n" ]
8a90da59605d99ae1423145e7823611a4ff0c21f
The **IndustrialDetectionStaticCameras** dataset is organized into five primary folders named `videoY`, where `Y=1,2,3,4,5`. Each `videoY` folder contains the following: - The video of the scene in `.mp4` format: `videoY.mp4` - A folder with the images of each frame of the video: `imgs_videoY` - A folder that includes, for each frame, a `.txt` file holding one annotation line per labelled object in KITTI format: `annotations_videoY` **Remark:** Each label file contains a set of lines, each line representing the annotation for a single object in the corresponding image. The format of each line is as follows: `<object_type> <truncation> <occlusion> <alpha> <left> <top> <right> <bottom> <height> <width> <length> <x> <y> <z> <rotation_y>`, where only the fields `<object_type>, <left>, <top>, <right>, <bottom>` and `<rotation_y>` are considered. The `<rotation_y>` field is used to indicate whether the labelled object is a static object in the scene (a value of `1` means the object is static and `0` means it is not). ### Download the dataset: ```python from datasets import load_dataset dataset = load_dataset("jjldo21/IndustrialDetectionStaticCameras") ```
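Since only a handful of the KITTI fields above are actually populated, a minimal parsing sketch can make the layout concrete. This is an illustrative reading of the format described in the card, not code shipped with the dataset, and the example path in the final comment is hypothetical.

```python
from pathlib import Path

def parse_kitti_line(line: str) -> dict:
    """Split one whitespace-separated KITTI annotation line into the used fields."""
    f = line.split()
    return {
        "object_type": f[0],
        "left": float(f[4]),
        "top": float(f[5]),
        "right": float(f[6]),
        "bottom": float(f[7]),
        # rotation_y (last field) is repurposed as a static-object flag: 1 = static, 0 = not.
        "is_static": int(float(f[14])) == 1,
    }

def load_frame_annotations(label_path: str) -> list:
    """Return the parsed annotations for a single frame's .txt file."""
    lines = Path(label_path).read_text().splitlines()
    return [parse_kitti_line(line) for line in lines if line.strip()]

# Hypothetical usage, following the folder layout described above:
# objects = load_frame_annotations("video1/annotations_video1/frame_0001.txt")
```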
jjldo21/IndustrialDetectionStaticCameras
[ "task_categories:object-detection", "size_categories:1K<n<10K", "license:mit", "industry", "doi:10.57967/hf/1710", "region:us" ]
2024-01-29T13:08:56+00:00
{"license": "mit", "size_categories": ["1K<n<10K"], "task_categories": ["object-detection"], "tags": ["industry"], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "labels", "sequence": [{"name": "object_type", "dtype": "string"}, {"name": "truncation", "dtype": "float32"}, {"name": "occlusion", "dtype": "int32"}, {"name": "alpha", "dtype": "int32"}, {"name": "left", "dtype": "float32"}, {"name": "top", "dtype": "float32"}, {"name": "right", "dtype": "float32"}, {"name": "bottom", "dtype": "float32"}, {"name": "height", "dtype": "int32"}, {"name": "width", "dtype": "int32"}, {"name": "length", "dtype": "int32"}, {"name": "x", "dtype": "int32"}, {"name": "y", "dtype": "int32"}, {"name": "z", "dtype": "int32"}, {"name": "rotation_y", "dtype": "int32"}]}], "splits": [{"name": "video1", "num_bytes": 4497677.132999999, "num_examples": 1261}, {"name": "video2", "num_bytes": 4116557.136, "num_examples": 1221}, {"name": "video3", "num_bytes": 4034190.129, "num_examples": 1221}, {"name": "video4", "num_bytes": 5164007.345000001, "num_examples": 1481}, {"name": "video5", "num_bytes": 4733783.518, "num_examples": 1301}], "download_size": 19236723, "dataset_size": 22546215.261}, "configs": [{"config_name": "default", "data_files": [{"split": "video1", "path": "data/video1-*"}, {"split": "video2", "path": "data/video2-*"}, {"split": "video3", "path": "data/video3-*"}, {"split": "video4", "path": "data/video4-*"}, {"split": "video5", "path": "data/video5-*"}]}]}
2024-02-07T20:41:45+00:00
[]
[]
TAGS #task_categories-object-detection #size_categories-1K<n<10K #license-mit #industry #doi-10.57967/hf/1710 #region-us
The IndustrialDetectionStaticCameras dataset is organized into five primary folders named 'videoY', where 'Y=1,2,3,4,5'. Each 'videoY' folder contains the following: - The video of the scene in '.mp4' format: 'videoY.mp4' - A folder with the images of each frame of the video: 'imgs_videoY' - A folder that includes, for each frame, a '.txt' file holding one annotation line per labelled object in KITTI format: 'annotations_videoY' Remark: Each label file contains a set of lines, each line representing the annotation for a single object in the corresponding image. The format of each line is as follows: '<object_type> <truncation> <occlusion> <alpha> <left> <top> <right> <bottom> <height> <width> <length> <x> <y> <z> <rotation_y>', where only the fields '<object_type>, <left>, <top>, <right>, <bottom>' and '<rotation_y>' are considered. The '<rotation_y>' field is used to indicate whether the labelled object is a static object in the scene (a value of '1' means the object is static and '0' means it is not). ### Download the dataset:
[ "### Download the dataset:" ]
[ "TAGS\n#task_categories-object-detection #size_categories-1K<n<10K #license-mit #industry #doi-10.57967/hf/1710 #region-us \n", "### Download the dataset:" ]
071fbcb9f2ac471ad5f8cc4474516cfce547f6c6
{"text": "### Human: Who founded ABCDE Capital in 2022. ### Assistant: Hey! ABCDE Capital is founded by Du Jun and BMAN in 2022."}
syhabcde/loratest
[ "region:us" ]
2024-01-29T13:11:20+00:00
{}
2024-01-29T13:48:04+00:00
[]
[]
TAGS #region-us
{"text": "### Human: Who founded ABCDE Capital in 2022. ### Assistant: Hey! ABCDE Capital is founded by Du Jun and BMAN in 2022."}
[ "### Human: Who founded ABCDE Capital in 2022. ### Assistant: Hey! ABCDE Capital is founded by Du Jun and BMAN in 2022.\"}" ]
[ "TAGS\n#region-us \n", "### Human: Who founded ABCDE Capital in 2022. ### Assistant: Hey! ABCDE Capital is founded by Du Jun and BMAN in 2022.\"}" ]
617724e45705bd1ac4446901ff8a761955541455
If you, like me, are interested in using AI to win sports, reach out to me to collaborate on this project. The purpose of this dataset would be to accomplish numerous tasks including but not limited to creating an AI NFL head coach, general manager, referees, predicting outcomes... Numerous machine-learning steps need to be accomplished on the way. Those include the following: 1. Take NFL game footage (all-22) and split it into plays 2. Classify each play according to many categorical groupings including but not limited to: - Play type / play name - Play result (literal) - Play formations - Play value (quantitative) - Individual player actions - Individual player assignments/reads - Individual player success on assignments - Team success on play - Flags on play 3. A realistic NFL simulation can be developed and machine learning can be used to correlate footage to simulated plays. - The simulation should include individual player assignments per play (such that an NFL head coach could verbally describe them to their players) - The simulation should not treat all players (or all players of the same position) as identical; rather, it should create quantitative descriptions for each player, including levels of uncertainty 4. Once a simulation corresponds well with actual gameplay and coach-style descriptions of play, reinforcement learning can be used to develop advanced gameplay strategies (as with chess and Go) which can be translated into coaching techniques 5. Player value should be easier to determine via this simulation. Steps to take: 1. Gather relevant footage (likely need to buy it from the NFL) 2. Design algorithms to expand the dataset with the information that can be gained from the footage (probably using human annotation, which can cause problems: humans might not be great at adequately valuing play outcomes) 3. Use the expanded dataset to accomplish the tasks above.
dolly-the-sheep/NFL_game_footage
[ "region:us" ]
2024-01-29T13:14:31+00:00
{}
2024-01-29T13:46:59+00:00
[]
[]
TAGS #region-us
If you, like me, are interested in using AI to win sports, reach out to me to collaborate on this project. The purpose of this dataset would be to accomplish numerous tasks including but not limited to creating an AI NFL head coach, general manager, referees, predicting outcomes... Numerous machine-learning steps need to be accomplished on the way. Those include the following: 1. Take NFL game footage (all-22) and split it into plays 2. Classify each play according to many categorical groupings including but not limited to: - Play type / play name - Play result (literal) - Play formations - Play value (quantitative) - Individual player actions - Individual player assignments/reads - Individual player success on assignments - Team success on play - Flags on play 3. A realistic NFL simulation can be developed and machine learning can be used to correlate footage to simulated plays. - The simulation should include individual player assignments per play (such that an NFL head coach could verbally describe them to their players) - The simulation should not treat all players (or all players of the same position) as identical; rather, it should create quantitative descriptions for each player, including levels of uncertainty 4. Once a simulation corresponds well with actual gameplay and coach-style descriptions of play, reinforcement learning can be used to develop advanced gameplay strategies (as with chess and Go) which can be translated into coaching techniques 5. Player value should be easier to determine via this simulation. Steps to take: 1. Gather relevant footage (likely need to buy it from the NFL) 2. Design algorithms to expand the dataset with the information that can be gained from the footage (probably using human annotation, which can cause problems: humans might not be great at adequately valuing play outcomes) 3. Use the expanded dataset to accomplish the tasks above.
[]
[ "TAGS\n#region-us \n" ]
9372de27599095ee086e9ae0701e61503d36243b
# Dataset Card for Evaluation run of SC99/Mistral-7B-summ-ia3-tuned-8h <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [SC99/Mistral-7B-summ-ia3-tuned-8h](https://huggingface.co/SC99/Mistral-7B-summ-ia3-tuned-8h) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configuration, each one coresponding to one of the evaluated task. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results. An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_SC99__Mistral-7B-summ-ia3-tuned-8h", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2024-01-29T13:30:16.956785](https://huggingface.co/datasets/open-llm-leaderboard/details_SC99__Mistral-7B-summ-ia3-tuned-8h/blob/main/results_2024-01-29T13-30-16.956785.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.598517503564931, "acc_stderr": 0.03329970966372362, "acc_norm": 0.60343791220307, "acc_norm_stderr": 0.03397979412812745, "mc1": 0.5520195838433293, "mc1_stderr": 0.017408513063422913, "mc2": 0.6830892289108447, "mc2_stderr": 0.015395499999839348 }, "harness|arc:challenge|25": { "acc": 0.5571672354948806, "acc_stderr": 0.014515573873348897, "acc_norm": 0.6117747440273038, "acc_norm_stderr": 0.01424161420741405 }, "harness|hellaswag|10": { "acc": 0.6754630551682932, "acc_stderr": 0.004672447046820004, "acc_norm": 0.8514240191196972, "acc_norm_stderr": 0.003549431247907358 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.32, "acc_stderr": 0.04688261722621503, "acc_norm": 0.32, "acc_norm_stderr": 0.04688261722621503 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.562962962962963, "acc_stderr": 0.04284958639753401, "acc_norm": 0.562962962962963, "acc_norm_stderr": 0.04284958639753401 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.6578947368421053, "acc_stderr": 0.03860731599316092, "acc_norm": 0.6578947368421053, "acc_norm_stderr": 0.03860731599316092 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.6, "acc_stderr": 0.049236596391733084, "acc_norm": 0.6, "acc_norm_stderr": 0.049236596391733084 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.6754716981132075, "acc_stderr": 0.02881561571343211, "acc_norm": 0.6754716981132075, "acc_norm_stderr": 0.02881561571343211 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.6875, "acc_stderr": 0.038760854559127644, "acc_norm": 0.6875, "acc_norm_stderr": 0.038760854559127644 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.38, "acc_stderr": 0.04878317312145633, "acc_norm": 0.38, "acc_norm_stderr": 0.04878317312145633 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.54, "acc_stderr": 0.05009082659620333, "acc_norm": 0.54, "acc_norm_stderr": 0.05009082659620333 }, 
"harness|hendrycksTest-college_mathematics|5": { "acc": 0.36, "acc_stderr": 0.04824181513244218, "acc_norm": 0.36, "acc_norm_stderr": 0.04824181513244218 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6011560693641619, "acc_stderr": 0.0373362665538351, "acc_norm": 0.6011560693641619, "acc_norm_stderr": 0.0373362665538351 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.4411764705882353, "acc_stderr": 0.049406356306056595, "acc_norm": 0.4411764705882353, "acc_norm_stderr": 0.049406356306056595 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.66, "acc_stderr": 0.04760952285695237, "acc_norm": 0.66, "acc_norm_stderr": 0.04760952285695237 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.5234042553191489, "acc_stderr": 0.032650194750335815, "acc_norm": 0.5234042553191489, "acc_norm_stderr": 0.032650194750335815 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.40350877192982454, "acc_stderr": 0.046151869625837026, "acc_norm": 0.40350877192982454, "acc_norm_stderr": 0.046151869625837026 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.6, "acc_stderr": 0.040824829046386284, "acc_norm": 0.6, "acc_norm_stderr": 0.040824829046386284 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.38095238095238093, "acc_stderr": 0.025010749116137602, "acc_norm": 0.38095238095238093, "acc_norm_stderr": 0.025010749116137602 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.4126984126984127, "acc_stderr": 0.04403438954768176, "acc_norm": 0.4126984126984127, "acc_norm_stderr": 0.04403438954768176 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.36, "acc_stderr": 0.04824181513244218, "acc_norm": 0.36, "acc_norm_stderr": 0.04824181513244218 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.6387096774193548, "acc_stderr": 0.02732754844795755, "acc_norm": 0.6387096774193548, "acc_norm_stderr": 0.02732754844795755 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.4729064039408867, "acc_stderr": 0.03512819077876106, "acc_norm": 0.4729064039408867, "acc_norm_stderr": 0.03512819077876106 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.59, "acc_stderr": 0.04943110704237102, "acc_norm": 0.59, "acc_norm_stderr": 0.04943110704237102 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.7151515151515152, "acc_stderr": 0.03524390844511781, "acc_norm": 0.7151515151515152, "acc_norm_stderr": 0.03524390844511781 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.7626262626262627, "acc_stderr": 0.030313710538198896, "acc_norm": 0.7626262626262627, "acc_norm_stderr": 0.030313710538198896 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.8549222797927462, "acc_stderr": 0.02541634309630643, "acc_norm": 0.8549222797927462, "acc_norm_stderr": 0.02541634309630643 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.5692307692307692, "acc_stderr": 0.02510682066053975, "acc_norm": 0.5692307692307692, "acc_norm_stderr": 0.02510682066053975 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.3074074074074074, "acc_stderr": 0.028133252578815642, "acc_norm": 0.3074074074074074, "acc_norm_stderr": 0.028133252578815642 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.6722689075630253, "acc_stderr": 0.030489911417673227, "acc_norm": 0.6722689075630253, "acc_norm_stderr": 0.030489911417673227 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.33774834437086093, "acc_stderr": 0.0386155754625517, "acc_norm": 
0.33774834437086093, "acc_norm_stderr": 0.0386155754625517 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.7926605504587156, "acc_stderr": 0.01738141556360868, "acc_norm": 0.7926605504587156, "acc_norm_stderr": 0.01738141556360868 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.44907407407407407, "acc_stderr": 0.03392238405321616, "acc_norm": 0.44907407407407407, "acc_norm_stderr": 0.03392238405321616 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.7303921568627451, "acc_stderr": 0.031145570659486782, "acc_norm": 0.7303921568627451, "acc_norm_stderr": 0.031145570659486782 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.7637130801687764, "acc_stderr": 0.027652153144159256, "acc_norm": 0.7637130801687764, "acc_norm_stderr": 0.027652153144159256 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.6233183856502242, "acc_stderr": 0.032521134899291884, "acc_norm": 0.6233183856502242, "acc_norm_stderr": 0.032521134899291884 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.7022900763358778, "acc_stderr": 0.040103589424622034, "acc_norm": 0.7022900763358778, "acc_norm_stderr": 0.040103589424622034 }, "harness|hendrycksTest-international_law|5": { "acc": 0.7768595041322314, "acc_stderr": 0.03800754475228732, "acc_norm": 0.7768595041322314, "acc_norm_stderr": 0.03800754475228732 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.7037037037037037, "acc_stderr": 0.04414343666854933, "acc_norm": 0.7037037037037037, "acc_norm_stderr": 0.04414343666854933 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7177914110429447, "acc_stderr": 0.03536117886664742, "acc_norm": 0.7177914110429447, "acc_norm_stderr": 0.03536117886664742 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.35714285714285715, "acc_stderr": 0.04547960999764376, "acc_norm": 0.35714285714285715, "acc_norm_stderr": 0.04547960999764376 }, "harness|hendrycksTest-management|5": { "acc": 0.7475728155339806, "acc_stderr": 0.04301250399690879, "acc_norm": 0.7475728155339806, "acc_norm_stderr": 0.04301250399690879 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8205128205128205, "acc_stderr": 0.025140935950335445, "acc_norm": 0.8205128205128205, "acc_norm_stderr": 0.025140935950335445 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.66, "acc_stderr": 0.04760952285695237, "acc_norm": 0.66, "acc_norm_stderr": 0.04760952285695237 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.7777777777777778, "acc_stderr": 0.014866821664709588, "acc_norm": 0.7777777777777778, "acc_norm_stderr": 0.014866821664709588 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.6878612716763006, "acc_stderr": 0.024946792225272314, "acc_norm": 0.6878612716763006, "acc_norm_stderr": 0.024946792225272314 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.2927374301675978, "acc_stderr": 0.015218109544410172, "acc_norm": 0.2927374301675978, "acc_norm_stderr": 0.015218109544410172 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.6699346405228758, "acc_stderr": 0.026925654653615697, "acc_norm": 0.6699346405228758, "acc_norm_stderr": 0.026925654653615697 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.6752411575562701, "acc_stderr": 0.026596782287697043, "acc_norm": 0.6752411575562701, "acc_norm_stderr": 0.026596782287697043 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.6882716049382716, "acc_stderr": 0.025773111169630446, "acc_norm": 0.6882716049382716, "acc_norm_stderr": 0.025773111169630446 }, "harness|hendrycksTest-professional_accounting|5": { 
"acc": 0.46808510638297873, "acc_stderr": 0.029766675075873866, "acc_norm": 0.46808510638297873, "acc_norm_stderr": 0.029766675075873866 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.4230769230769231, "acc_stderr": 0.01261820406658839, "acc_norm": 0.4230769230769231, "acc_norm_stderr": 0.01261820406658839 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.6323529411764706, "acc_stderr": 0.02928941340940319, "acc_norm": 0.6323529411764706, "acc_norm_stderr": 0.02928941340940319 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6241830065359477, "acc_stderr": 0.01959402113657744, "acc_norm": 0.6241830065359477, "acc_norm_stderr": 0.01959402113657744 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.7181818181818181, "acc_stderr": 0.043091187099464585, "acc_norm": 0.7181818181818181, "acc_norm_stderr": 0.043091187099464585 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.7061224489795919, "acc_stderr": 0.02916273841024977, "acc_norm": 0.7061224489795919, "acc_norm_stderr": 0.02916273841024977 }, "harness|hendrycksTest-sociology|5": { "acc": 0.7164179104477612, "acc_stderr": 0.03187187537919797, "acc_norm": 0.7164179104477612, "acc_norm_stderr": 0.03187187537919797 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.82, "acc_stderr": 0.03861229196653693, "acc_norm": 0.82, "acc_norm_stderr": 0.03861229196653693 }, "harness|hendrycksTest-virology|5": { "acc": 0.4879518072289157, "acc_stderr": 0.03891364495835821, "acc_norm": 0.4879518072289157, "acc_norm_stderr": 0.03891364495835821 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.8187134502923976, "acc_stderr": 0.029547741687640038, "acc_norm": 0.8187134502923976, "acc_norm_stderr": 0.029547741687640038 }, "harness|truthfulqa:mc|0": { "mc1": 0.5520195838433293, "mc1_stderr": 0.017408513063422913, "mc2": 0.6830892289108447, "mc2_stderr": 0.015395499999839348 }, "harness|winogrande|5": { "acc": 0.771112865035517, "acc_stderr": 0.011807360224025386 }, "harness|gsm8k|5": { "acc": 0.3661865049279757, "acc_stderr": 0.013270100238748835 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). 
--> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
open-llm-leaderboard/details_SC99__Mistral-7B-summ-ia3-tuned-8h
[ "region:us" ]
2024-01-29T13:32:34+00:00
{"pretty_name": "Evaluation run of SC99/Mistral-7B-summ-ia3-tuned-8h", "dataset_summary": "Dataset automatically created during the evaluation run of model [SC99/Mistral-7B-summ-ia3-tuned-8h](https://huggingface.co/SC99/Mistral-7B-summ-ia3-tuned-8h) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_SC99__Mistral-7B-summ-ia3-tuned-8h\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-29T13:30:16.956785](https://huggingface.co/datasets/open-llm-leaderboard/details_SC99__Mistral-7B-summ-ia3-tuned-8h/blob/main/results_2024-01-29T13-30-16.956785.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.598517503564931,\n \"acc_stderr\": 0.03329970966372362,\n \"acc_norm\": 0.60343791220307,\n \"acc_norm_stderr\": 0.03397979412812745,\n \"mc1\": 0.5520195838433293,\n \"mc1_stderr\": 0.017408513063422913,\n \"mc2\": 0.6830892289108447,\n \"mc2_stderr\": 0.015395499999839348\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5571672354948806,\n \"acc_stderr\": 0.014515573873348897,\n \"acc_norm\": 0.6117747440273038,\n \"acc_norm_stderr\": 0.01424161420741405\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6754630551682932,\n \"acc_stderr\": 0.004672447046820004,\n \"acc_norm\": 0.8514240191196972,\n \"acc_norm_stderr\": 0.003549431247907358\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.04688261722621503,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.04688261722621503\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.562962962962963,\n \"acc_stderr\": 0.04284958639753401,\n \"acc_norm\": 0.562962962962963,\n \"acc_norm_stderr\": 0.04284958639753401\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6578947368421053,\n \"acc_stderr\": 0.03860731599316092,\n \"acc_norm\": 0.6578947368421053,\n \"acc_norm_stderr\": 0.03860731599316092\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.6,\n \"acc_stderr\": 0.049236596391733084,\n \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.049236596391733084\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6754716981132075,\n \"acc_stderr\": 0.02881561571343211,\n \"acc_norm\": 0.6754716981132075,\n \"acc_norm_stderr\": 0.02881561571343211\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6875,\n \"acc_stderr\": 0.038760854559127644,\n \"acc_norm\": 0.6875,\n \"acc_norm_stderr\": 0.038760854559127644\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.38,\n \"acc_stderr\": 
0.04878317312145633,\n \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.04878317312145633\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.54,\n \"acc_stderr\": 0.05009082659620333,\n \"acc_norm\": 0.54,\n \"acc_norm_stderr\": 0.05009082659620333\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.36,\n \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6011560693641619,\n \"acc_stderr\": 0.0373362665538351,\n \"acc_norm\": 0.6011560693641619,\n \"acc_norm_stderr\": 0.0373362665538351\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.4411764705882353,\n \"acc_stderr\": 0.049406356306056595,\n \"acc_norm\": 0.4411764705882353,\n \"acc_norm_stderr\": 0.049406356306056595\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.66,\n \"acc_stderr\": 0.04760952285695237,\n \"acc_norm\": 0.66,\n \"acc_norm_stderr\": 0.04760952285695237\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5234042553191489,\n \"acc_stderr\": 0.032650194750335815,\n \"acc_norm\": 0.5234042553191489,\n \"acc_norm_stderr\": 0.032650194750335815\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.40350877192982454,\n \"acc_stderr\": 0.046151869625837026,\n \"acc_norm\": 0.40350877192982454,\n \"acc_norm_stderr\": 0.046151869625837026\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.6,\n \"acc_stderr\": 0.040824829046386284,\n \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.040824829046386284\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.38095238095238093,\n \"acc_stderr\": 0.025010749116137602,\n \"acc_norm\": 0.38095238095238093,\n \"acc_norm_stderr\": 0.025010749116137602\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4126984126984127,\n \"acc_stderr\": 0.04403438954768176,\n \"acc_norm\": 0.4126984126984127,\n \"acc_norm_stderr\": 0.04403438954768176\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.36,\n \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.6387096774193548,\n \"acc_stderr\": 0.02732754844795755,\n \"acc_norm\": 0.6387096774193548,\n \"acc_norm_stderr\": 0.02732754844795755\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.4729064039408867,\n \"acc_stderr\": 0.03512819077876106,\n \"acc_norm\": 0.4729064039408867,\n \"acc_norm_stderr\": 0.03512819077876106\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.59,\n \"acc_stderr\": 0.04943110704237102,\n \"acc_norm\": 0.59,\n \"acc_norm_stderr\": 0.04943110704237102\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7151515151515152,\n \"acc_stderr\": 0.03524390844511781,\n \"acc_norm\": 0.7151515151515152,\n \"acc_norm_stderr\": 0.03524390844511781\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7626262626262627,\n \"acc_stderr\": 0.030313710538198896,\n \"acc_norm\": 0.7626262626262627,\n \"acc_norm_stderr\": 0.030313710538198896\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8549222797927462,\n \"acc_stderr\": 0.02541634309630643,\n \"acc_norm\": 0.8549222797927462,\n \"acc_norm_stderr\": 0.02541634309630643\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 
0.5692307692307692,\n \"acc_stderr\": 0.02510682066053975,\n \"acc_norm\": 0.5692307692307692,\n \"acc_norm_stderr\": 0.02510682066053975\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3074074074074074,\n \"acc_stderr\": 0.028133252578815642,\n \"acc_norm\": 0.3074074074074074,\n \"acc_norm_stderr\": 0.028133252578815642\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6722689075630253,\n \"acc_stderr\": 0.030489911417673227,\n \"acc_norm\": 0.6722689075630253,\n \"acc_norm_stderr\": 0.030489911417673227\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.33774834437086093,\n \"acc_stderr\": 0.0386155754625517,\n \"acc_norm\": 0.33774834437086093,\n \"acc_norm_stderr\": 0.0386155754625517\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.7926605504587156,\n \"acc_stderr\": 0.01738141556360868,\n \"acc_norm\": 0.7926605504587156,\n \"acc_norm_stderr\": 0.01738141556360868\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.44907407407407407,\n \"acc_stderr\": 0.03392238405321616,\n \"acc_norm\": 0.44907407407407407,\n \"acc_norm_stderr\": 0.03392238405321616\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7303921568627451,\n \"acc_stderr\": 0.031145570659486782,\n \"acc_norm\": 0.7303921568627451,\n \"acc_norm_stderr\": 0.031145570659486782\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7637130801687764,\n \"acc_stderr\": 0.027652153144159256,\n \"acc_norm\": 0.7637130801687764,\n \"acc_norm_stderr\": 0.027652153144159256\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6233183856502242,\n \"acc_stderr\": 0.032521134899291884,\n \"acc_norm\": 0.6233183856502242,\n \"acc_norm_stderr\": 0.032521134899291884\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7022900763358778,\n \"acc_stderr\": 0.040103589424622034,\n \"acc_norm\": 0.7022900763358778,\n \"acc_norm_stderr\": 0.040103589424622034\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7768595041322314,\n \"acc_stderr\": 0.03800754475228732,\n \"acc_norm\": 0.7768595041322314,\n \"acc_norm_stderr\": 0.03800754475228732\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7037037037037037,\n \"acc_stderr\": 0.04414343666854933,\n \"acc_norm\": 0.7037037037037037,\n \"acc_norm_stderr\": 0.04414343666854933\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7177914110429447,\n \"acc_stderr\": 0.03536117886664742,\n \"acc_norm\": 0.7177914110429447,\n \"acc_norm_stderr\": 0.03536117886664742\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.35714285714285715,\n \"acc_stderr\": 0.04547960999764376,\n \"acc_norm\": 0.35714285714285715,\n \"acc_norm_stderr\": 0.04547960999764376\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7475728155339806,\n \"acc_stderr\": 0.04301250399690879,\n \"acc_norm\": 0.7475728155339806,\n \"acc_norm_stderr\": 0.04301250399690879\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8205128205128205,\n \"acc_stderr\": 0.025140935950335445,\n \"acc_norm\": 0.8205128205128205,\n \"acc_norm_stderr\": 0.025140935950335445\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.66,\n \"acc_stderr\": 0.04760952285695237,\n \"acc_norm\": 0.66,\n \"acc_norm_stderr\": 0.04760952285695237\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7777777777777778,\n \"acc_stderr\": 0.014866821664709588,\n 
\"acc_norm\": 0.7777777777777778,\n \"acc_norm_stderr\": 0.014866821664709588\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.6878612716763006,\n \"acc_stderr\": 0.024946792225272314,\n \"acc_norm\": 0.6878612716763006,\n \"acc_norm_stderr\": 0.024946792225272314\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2927374301675978,\n \"acc_stderr\": 0.015218109544410172,\n \"acc_norm\": 0.2927374301675978,\n \"acc_norm_stderr\": 0.015218109544410172\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.6699346405228758,\n \"acc_stderr\": 0.026925654653615697,\n \"acc_norm\": 0.6699346405228758,\n \"acc_norm_stderr\": 0.026925654653615697\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6752411575562701,\n \"acc_stderr\": 0.026596782287697043,\n \"acc_norm\": 0.6752411575562701,\n \"acc_norm_stderr\": 0.026596782287697043\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.6882716049382716,\n \"acc_stderr\": 0.025773111169630446,\n \"acc_norm\": 0.6882716049382716,\n \"acc_norm_stderr\": 0.025773111169630446\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.46808510638297873,\n \"acc_stderr\": 0.029766675075873866,\n \"acc_norm\": 0.46808510638297873,\n \"acc_norm_stderr\": 0.029766675075873866\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4230769230769231,\n \"acc_stderr\": 0.01261820406658839,\n \"acc_norm\": 0.4230769230769231,\n \"acc_norm_stderr\": 0.01261820406658839\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6323529411764706,\n \"acc_stderr\": 0.02928941340940319,\n \"acc_norm\": 0.6323529411764706,\n \"acc_norm_stderr\": 0.02928941340940319\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6241830065359477,\n \"acc_stderr\": 0.01959402113657744,\n \"acc_norm\": 0.6241830065359477,\n \"acc_norm_stderr\": 0.01959402113657744\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7181818181818181,\n \"acc_stderr\": 0.043091187099464585,\n \"acc_norm\": 0.7181818181818181,\n \"acc_norm_stderr\": 0.043091187099464585\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7061224489795919,\n \"acc_stderr\": 0.02916273841024977,\n \"acc_norm\": 0.7061224489795919,\n \"acc_norm_stderr\": 0.02916273841024977\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7164179104477612,\n \"acc_stderr\": 0.03187187537919797,\n \"acc_norm\": 0.7164179104477612,\n \"acc_norm_stderr\": 0.03187187537919797\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.82,\n \"acc_stderr\": 0.03861229196653693,\n \"acc_norm\": 0.82,\n \"acc_norm_stderr\": 0.03861229196653693\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.4879518072289157,\n \"acc_stderr\": 0.03891364495835821,\n \"acc_norm\": 0.4879518072289157,\n \"acc_norm_stderr\": 0.03891364495835821\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8187134502923976,\n \"acc_stderr\": 0.029547741687640038,\n \"acc_norm\": 0.8187134502923976,\n \"acc_norm_stderr\": 0.029547741687640038\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5520195838433293,\n \"mc1_stderr\": 0.017408513063422913,\n \"mc2\": 0.6830892289108447,\n \"mc2_stderr\": 0.015395499999839348\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.771112865035517,\n \"acc_stderr\": 0.011807360224025386\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.3661865049279757,\n \"acc_stderr\": 0.013270100238748835\n }\n}\n```", "repo_url": 
"https://huggingface.co/SC99/Mistral-7B-summ-ia3-tuned-8h", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|arc:challenge|25_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|gsm8k|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|hellaswag|10_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-29T13-30-16.956785.parquet", 
"**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-29T13-30-16.956785.parquet", 
"**/details_harness|hendrycksTest-college_computer_science|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-29T13-30-16.956785.parquet", 
"**/details_harness|hendrycksTest-philosophy|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-29T13-30-16.956785.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": 
["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-global_facts|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-29T13-30-16.956785.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_29T13_30_16.956785", "path": ["**/details_harness|winogrande|5_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-29T13-30-16.956785.parquet"]}]}, {"config_name": "results", "data_files": [{"split": 
"2024_01_29T13_30_16.956785", "path": ["results_2024-01-29T13-30-16.956785.parquet"]}, {"split": "latest", "path": ["results_2024-01-29T13-30-16.956785.parquet"]}]}]}
2024-01-29T13:32:57+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of SC99/Mistral-7B-summ-ia3-tuned-8h Dataset automatically created during the evaluation run of model SC99/Mistral-7B-summ-ia3-tuned-8h on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2024-01-29T13:30:16.956785 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
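The loading snippet referenced above is stripped in this flattened rendering; the call preserved in this record's metadata looks like the following (a minimal sketch — `harness_winogrande_5` is just one of the 63 available configurations):

```python
from datasets import load_dataset

# Each configuration corresponds to one evaluated task; timestamped splits
# hold individual runs, and the "train" split always points to the latest run.
data = load_dataset(
    "open-llm-leaderboard/details_SC99__Mistral-7B-summ-ia3-tuned-8h",
    "harness_winogrande_5",
    split="train",
)
```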
[ "# Dataset Card for Evaluation run of SC99/Mistral-7B-summ-ia3-tuned-8h\n\n\n\nDataset automatically created during the evaluation run of model SC99/Mistral-7B-summ-ia3-tuned-8h on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-01-29T13:30:16.956785(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of SC99/Mistral-7B-summ-ia3-tuned-8h\n\n\n\nDataset automatically created during the evaluation run of model SC99/Mistral-7B-summ-ia3-tuned-8h on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-01-29T13:30:16.956785(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
ff1009f28666c1e57cee834af1d5fa88f4014275
# Dataset Card for Dataset Name This is a complement to the TREC CaST (2020-22) datasets, with pre-computed offsets relative to the original files.
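A minimal usage sketch (not part of the card itself); the split name and column layout below are assumptions, so check the repository files before relying on them:

```python
from datasets import load_dataset

# Hypothetical example of pulling the pre-computed offsets for inspection;
# "train" and the printed columns are assumptions, not documented in the card.
offsets = load_dataset("irds/trec_cast_offsets", split="train")
print(offsets.column_names)
print(offsets[0])
```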
irds/trec_cast_offsets
[ "license:lgpl", "region:us" ]
2024-01-29T13:55:07+00:00
{"license": "lgpl"}
2024-01-31T13:16:12+00:00
[]
[]
TAGS #license-lgpl #region-us
# Dataset Card for Dataset Name This is a complement to the TREC CaST (2020-22) datasets, with pre-computed offsets relative to the original files.
[ "# Dataset Card for Dataset Name\n\nThis is a complement to the TREC CaST (2020-22) datasets, with pre-computed offset relative to the original files." ]
[ "TAGS\n#license-lgpl #region-us \n", "# Dataset Card for Dataset Name\n\nThis is a complement to the TREC CaST (2020-22) datasets, with pre-computed offset relative to the original files." ]
b1c2f5f4a21e54d9911d6eb57d1520eb1795b21a
A German translation for the [stingning/ultrachat](https://huggingface.co/datasets/stingning/ultrachat) dataset. Extracted from [seedboxventures/multitask_german_examples_32k](https://huggingface.co/datasets/seedboxventures/multitask_german_examples_32k). Translation created by [seedbox ai](https://huggingface.co/seedboxai) for [KafkaLM](https://huggingface.co/seedboxai/KafkaLM-70B-German-V0.1) ❤️. Available for finetuning in [hiyouga/LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory).
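A short loading sketch (not part of the original card); the split name and column layout are assumptions and may differ from the actual repository:

```python
from datasets import load_dataset

# Inspect the German UltraChat translation before wiring it into a fine-tuning
# pipeline such as LLaMA-Factory; "train" and the printed columns are assumptions.
ds = load_dataset("mayflowergmbh/ultra-chat_de", split="train")
print(ds.column_names)
print(ds[0])
```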
mayflowergmbh/ultra-chat_de
[ "task_categories:text-generation", "language:de", "region:us" ]
2024-01-29T14:00:16+00:00
{"language": ["de"], "task_categories": ["text-generation"]}
2024-02-14T16:30:34+00:00
[]
[ "de" ]
TAGS #task_categories-text-generation #language-German #region-us
A German translation for the stingning/ultrachat dataset. Extracted from seedboxventures/multitask_german_examples_32k. Translation created by seedbox ai for KafkaLM. Available for finetuning in hiyouga/LLaMA-Factory.
[]
[ "TAGS\n#task_categories-text-generation #language-German #region-us \n" ]
0eaf79d1810fda2427edf22f2d7b37212e2b8d54
A German translation for the [jondurbin/airoboros-3.0](https://huggingface.co/datasets/jondurbin/airoboros-3.0) dataset. Extracted from [seedboxventures/multitask_german_examples_32k](https://huggingface.co/datasets/seedboxventures/multitask_german_examples_32k). Translation created by [seedbox ai](https://huggingface.co/seedboxai) for [KafkaLM](https://huggingface.co/seedboxai/KafkaLM-70B-German-V0.1) ❤️. Available for finetuning in [hiyouga/LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory).
mayflowergmbh/airoboros-3.0_de
[ "task_categories:text-generation", "language:de", "region:us" ]
2024-01-29T14:07:04+00:00
{"language": ["de"], "task_categories": ["text-generation"]}
2024-02-14T16:31:12+00:00
[]
[ "de" ]
TAGS #task_categories-text-generation #language-German #region-us
A German translation for the jondurbin/airoboros-3.0 dataset. Extracted from seedboxventures/multitask_german_examples_32k. Translation created by seedbox ai for KafkaLM. Available for finetuning in hiyouga/LLaMA-Factory.
[]
[ "TAGS\n#task_categories-text-generation #language-German #region-us \n" ]
bf495bf3b68bb8b4fa4d3127dacc83018def27f7
A German translation for the [booksum](https://huggingface.co/datasets/togethercomputer/Long-Data-Collections) dataset. Extracted from [seedboxventures/multitask_german_examples_32k](https://huggingface.co/datasets/seedboxventures/multitask_german_examples_32k). Translation created by [seedbox ai](https://huggingface.co/seedboxai) for [KafkaLM](https://huggingface.co/seedboxai/KafkaLM-70B-German-V0.1) ❤️. Available for finetuning in [hiyouga/LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory).
mayflowergmbh/booksum_de
[ "task_categories:text-generation", "language:de", "region:us" ]
2024-01-29T14:13:12+00:00
{"language": ["de"], "task_categories": ["text-generation"]}
2024-02-14T16:34:55+00:00
[]
[ "de" ]
TAGS #task_categories-text-generation #language-German #region-us
A German translation for the booksum dataset. Extracted from seedboxventures/multitask_german_examples_32k. Translation created by seedbox ai for KafkaLM. Available for finetuning in hiyouga/LLaMA-Factory.
[]
[ "TAGS\n#task_categories-text-generation #language-German #region-us \n" ]
dbbac276d96ce0307fea064e402d09c0f6f3b6c5
A German translation for the [cognitivecomputations/dolphin](https://huggingface.co/datasets/cognitivecomputations/dolphin) dataset. Extracted from [seedboxventures/multitask_german_examples_32k](https://huggingface.co/datasets/seedboxventures/multitask_german_examples_32k). Translation created by [seedbox ai](https://huggingface.co/seedboxai) for [KafkaLM](https://huggingface.co/seedboxai/KafkaLM-70B-German-V0.1) ❤️. Available for finetuning in [hiyouga/LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory).
mayflowergmbh/dolphin_de
[ "task_categories:text-generation", "language:de", "region:us" ]
2024-01-29T14:21:18+00:00
{"language": ["de"], "task_categories": ["text-generation"]}
2024-02-14T16:29:16+00:00
[]
[ "de" ]
TAGS #task_categories-text-generation #language-German #region-us
A German translation for the cognitivecomputations/dolphin dataset. Extracted from seedboxventures/multitask_german_examples_32k. Translation created by seedbox ai for KafkaLM. Available for finetuning in hiyouga/LLaMA-Factory.
[]
[ "TAGS\n#task_categories-text-generation #language-German #region-us \n" ]
e86dded66a93ff34743942c2420301c28eed1d6f
A German translation for the [wiki_qa](https://huggingface.co/datasets/wiki_qa) dataset. Extracted from [seedboxventures/multitask_german_examples_32k](https://huggingface.co/datasets/seedboxventures/multitask_german_examples_32k). Translation created by [seedbox ai](https://huggingface.co/seedboxai) for [KafkaLM](https://huggingface.co/seedboxai/KafkaLM-70B-German-V0.1) ❤️. Available for finetuning in [hiyouga/LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory).
mayflowergmbh/wiki_qa_de
[ "task_categories:text-generation", "language:de", "region:us" ]
2024-01-29T14:24:44+00:00
{"language": ["de"], "task_categories": ["text-generation"]}
2024-02-14T16:31:54+00:00
[]
[ "de" ]
TAGS #task_categories-text-generation #language-German #region-us
A German translation for the wiki_qa dataset. Extracted from seedboxventures/multitask_german_examples_32k. Translation created by seedbox ai for KafkaLM. Available for finetuning in hiyouga/LLaMA-Factory.
[]
[ "TAGS\n#task_categories-text-generation #language-German #region-us \n" ]
d34305cc98acc67e6fa9cd46b7b7e7223194c17a
# Dataset Card for Evaluation run of Qwen/Qwen2-beta-14B <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [Qwen/Qwen2-beta-14B](https://huggingface.co/Qwen/Qwen2-beta-14B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_Qwen__Qwen2-beta-14B_private", "harness_gsm8k_5", split="train") ``` ## Latest results These are the [latest results from run 2024-01-30T11:39:41.356084](https://huggingface.co/datasets/open-llm-leaderboard/details_Qwen__Qwen2-beta-14B_private/blob/main/results_2024-01-30T11-39-41.356084.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.6762699014404853, "acc_stderr": 0.012888247397371141 }, "harness|gsm8k|5": { "acc": 0.6762699014404853, "acc_stderr": 0.012888247397371141 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. 
It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
open-llm-leaderboard/details_Qwen__Qwen2-beta-14B
[ "region:us" ]
2024-01-29T15:40:07+00:00
{"pretty_name": "Evaluation run of Qwen/Qwen2-beta-14B", "dataset_summary": "Dataset automatically created during the evaluation run of model [Qwen/Qwen2-beta-14B](https://huggingface.co/Qwen/Qwen2-beta-14B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Qwen__Qwen2-beta-14B_private\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-30T11:39:41.356084](https://huggingface.co/datasets/open-llm-leaderboard/details_Qwen__Qwen2-beta-14B_private/blob/main/results_2024-01-30T11-39-41.356084.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6762699014404853,\n \"acc_stderr\": 0.012888247397371141\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6762699014404853,\n \"acc_stderr\": 0.012888247397371141\n }\n}\n```", "repo_url": "https://huggingface.co/Qwen/Qwen2-beta-14B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|arc:challenge|25_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|gsm8k|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "2024_01_30T11_39_41.356084", "path": ["**/details_harness|gsm8k|5_2024-01-30T11-39-41.356084.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-30T11-39-41.356084.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hellaswag|10_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-29T15-38-03.755073.parquet", 
"**/details_harness|hendrycksTest-college_biology|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-29T15-38-03.755073.parquet", 
"**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-29T15-38-03.755073.parquet", 
"**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-29T15-38-03.755073.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-anatomy|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", 
"data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", 
"path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": 
[{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["**/details_harness|winogrande|5_2024-01-29T15-38-03.755073.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-29T15-38-03.755073.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_01_29T15_38_03.755073", "path": ["results_2024-01-29T15-38-03.755073.parquet"]}, {"split": "2024_01_30T11_39_41.356084", "path": ["results_2024-01-30T11-39-41.356084.parquet"]}, {"split": "latest", "path": ["results_2024-01-30T11-39-41.356084.parquet"]}]}]}
2024-01-30T11:39:46+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of Qwen/Qwen2-beta-14B Dataset automatically created during the evaluation run of model Qwen/Qwen2-beta-14B on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2024-01-30T11:39:41.356084 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Evaluation run of Qwen/Qwen2-beta-14B\n\n\n\nDataset automatically created during the evaluation run of model Qwen/Qwen2-beta-14B on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-01-30T11:39:41.356084(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of Qwen/Qwen2-beta-14B\n\n\n\nDataset automatically created during the evaluation run of model Qwen/Qwen2-beta-14B on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-01-30T11:39:41.356084(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
05fc56a9b51758d872dfc2ea2113666e448ee82a
# LongAlign-10k <p align="center"> 🤗 <a href="https://huggingface.co/datasets/THUDM/LongAlign-10k" target="_blank">[LongAlign Dataset] </a> • 💻 <a href="https://github.com/THUDM/LongAlign" target="_blank">[Github Repo]</a> • 📃 <a href="https://arxiv.org/abs/2401.18058" target="_blank">[LongAlign Paper]</a> </p> **LongAlign** is the first full recipe for LLM alignment on long context. We propose the **LongAlign-10k** dataset, containing 10,000 long instruction-following examples ranging from 8k to 64k in length. We investigate training strategies, namely **packing (with loss weighting) and sorted batching**, both of which are implemented in our code. For real-world long-context evaluation, we introduce **LongBench-Chat**, which evaluates instruction-following capability on queries of 10k-100k in length. ## All Models We open-sourced the following list of models: |Model|Huggingface Repo|Description| |---|---|---| |**LongAlign-6B-64k-base**| [🤗 Huggingface Repo](https://huggingface.co/THUDM/LongAlign-6B-64k-base) | **ChatGLM3-6B** with an extended 64k context window | |**LongAlign-6B-64k**| [🤗 Huggingface Repo](https://huggingface.co/THUDM/LongAlign-6B-64k) | Chat model by LongAlign training on LongAlign-6B-64k-base| |**LongAlign-7B-64k-base**| [🤗 Huggingface Repo](https://huggingface.co/THUDM/LongAlign-7B-64k-base) | **Llama-2-7B** with an extended 64k context window | |**LongAlign-7B-64k**| [🤗 Huggingface Repo](https://huggingface.co/THUDM/LongAlign-7B-64k) | Chat model by LongAlign training on LongAlign-7B-64k-base| |**LongAlign-13B-64k-base**| [🤗 Huggingface Repo](https://huggingface.co/THUDM/LongAlign-13B-64k-base) | **Llama-2-13B** with an extended 64k context window | |**LongAlign-13B-64k**| [🤗 Huggingface Repo](https://huggingface.co/THUDM/LongAlign-13B-64k) | Chat model by LongAlign training on LongAlign-13B-64k-base| |**ChatGLM3-6B-128k**| [🤗 Huggingface Repo](https://huggingface.co/THUDM/chatglm3-6b-128k) | **ChatGLM3-6B** with a 128k context window|
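To make the sorted-batching strategy mentioned above concrete, here is a minimal, self-contained sketch of the idea (illustrative only, not the implementation from the LongAlign repository): examples are ordered by length so each batch holds sequences of similar size, which reduces padding when training on 8k-64k inputs.

```python
import random

# Synthetic stand-ins for long SFT examples; only the length matters for this sketch.
examples = [{"id": i, "length": random.randint(8_000, 64_000)} for i in range(32)]

def sorted_batches(items, batch_size):
    # Sort by sequence length, then cut into contiguous batches so every batch
    # contains examples of comparable length (less padding per batch).
    ordered = sorted(items, key=lambda ex: ex["length"])
    return [ordered[i:i + batch_size] for i in range(0, len(ordered), batch_size)]

batches = sorted_batches(examples, batch_size=8)
random.shuffle(batches)  # shuffle batch order so training still sees mixed lengths over time
print([[ex["length"] for ex in batch] for batch in batches[:2]])
```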
THUDM/LongAlign-10k
[ "task_categories:conversational", "task_categories:question-answering", "size_categories:10K<n<100K", "language:en", "language:zh", "Long Context", "sft", "arxiv:2401.18058", "region:us" ]
2024-01-29T15:49:36+00:00
{"language": ["en", "zh"], "size_categories": ["10K<n<100K"], "task_categories": ["conversational", "question-answering"], "tags": ["Long Context", "sft"]}
2024-02-01T07:30:09+00:00
[ "2401.18058" ]
[ "en", "zh" ]
TAGS #task_categories-conversational #task_categories-question-answering #size_categories-10K<n<100K #language-English #language-Chinese #Long Context #sft #arxiv-2401.18058 #region-us
LongAlign-10k ============= [[LongAlign Dataset]](URL target=) • [[Github Repo]](URL target=) • [[LongAlign Paper]](URL target=) LongAlign is the first full recipe for LLM alignment on long context. We propose the LongAlign-10k dataset, containing 10,000 long instruction-following examples ranging from 8k to 64k in length. We investigate training strategies, namely packing (with loss weighting) and sorted batching, both of which are implemented in our code. For real-world long-context evaluation, we introduce LongBench-Chat, which evaluates instruction-following capability on queries of 10k-100k in length. All Models ---------- We open-sourced the following list of models: Model: LongAlign-6B-64k-base, Huggingface Repo: Huggingface Repo, Description: ChatGLM3-6B with an extended 64k context window Model: LongAlign-6B-64k, Huggingface Repo: Huggingface Repo, Description: Chat model by LongAlign training on LongAlign-6B-64k-base Model: LongAlign-7B-64k-base, Huggingface Repo: Huggingface Repo, Description: Llama-2-7B with an extended 64k context window Model: LongAlign-7B-64k, Huggingface Repo: Huggingface Repo, Description: Chat model by LongAlign training on LongAlign-7B-64k-base Model: LongAlign-13B-64k-base, Huggingface Repo: Huggingface Repo, Description: Llama-2-13B with an extended 64k context window Model: LongAlign-13B-64k, Huggingface Repo: Huggingface Repo, Description: Chat model by LongAlign training on LongAlign-13B-64k-base Model: ChatGLM3-6B-128k, Huggingface Repo: Huggingface Repo, Description: ChatGLM3-6B with a 128k context window
[]
[ "TAGS\n#task_categories-conversational #task_categories-question-answering #size_categories-10K<n<100K #language-English #language-Chinese #Long Context #sft #arxiv-2401.18058 #region-us \n" ]
7e4ba1fa2e3636b15e5291b6c51ef69cdd65c746
Hi! This is an experimental dataset concept. I've been working with my good buddy [@Sao](https://huggingface.co/Sao10K) on an NSFW General Instruct Dataset. We've been working together at it for a while. |--- This is v1, it is *not good* as we decided to use samples of the No Robots dataset for instruction, using a small model to run the dataset preparation. |--- He recently figured out how to curate datasets much faster with Aphrodite-engine so we are releasing the prototype, incomplete attempt as we work on v2 and so on :) |--- We are doing a curated mix of human / model instructions and double-checking responses to keep it high quality, so the results would not look as scuffed as this version. | ⸜(。˃ ᵕ ˂ )⸝♡₊˚⊹♡・:*ੈ Dataset format is Alpaca: Instruction + Output ``` "instruction": "My friend is starting a business where she makes flower bouquets and centerpieces for weddings. She is having a hard time coming up with a name. Can you give me some suggestions? She would like a name with some alliteration.", "output": "\n Sure thing! How about these lewd, luscious suggestions:\n\n1. Blossoming Boudoir Bridal Boutique\n2. Petal-Perfect Pairings Palace\n3. Flowery Fantasies for Fiancés\n4. Blooming Bridal Bedlam\n5. Erotic Entwined Engagement Enclave\n6. Vivacious Vine Vows Venue\n7. Seductive Stem Studio\n8. Sensual Sprigs for Sinful Suitors\n9. Tempting Tendrils Temptress Troupe\n\nI'm sure one of these names will have her business blooming in no time!" ``` *ps, the newline at the beginning of output was an unchecked mistake, oops!*
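Below is a minimal, hedged sketch of reading the records and trimming that stray leading newline; the repo id comes from this card, while the split name and the exact Alpaca-style column names are assumptions.

```python
from datasets import load_dataset

# Assumes a "train" split with Alpaca-style "instruction" / "output" columns,
# as shown in the example record above.
ds = load_dataset("Himitsui/Lewd-Assistant-v1", split="train")
ds = ds.map(lambda ex: {"output": ex["output"].lstrip("\n ")})  # drop the stray leading newline
print(ds[0]["instruction"][:80])
print(ds[0]["output"][:80])
```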
Himitsui/Lewd-Assistant-v1
[ "language:en", "license:cc-by-nc-4.0", "not-for-all-audiences", "region:us" ]
2024-01-29T16:02:28+00:00
{"language": ["en"], "license": "cc-by-nc-4.0", "tags": ["not-for-all-audiences"]}
2024-01-29T16:18:31+00:00
[]
[ "en" ]
TAGS #language-English #license-cc-by-nc-4.0 #not-for-all-audiences #region-us
Hi! This is an experimental dataset concept. I've been working with my good buddy @Sao on an NSFW General Instruct Dataset. We've been working together at it for a while. |--- This is v1, it is *not good* as we decided to use samples of the No Robots dataset for instruction, using a small model to run the dataset preparation. |--- He recently figured out how to curate datasets much faster with Aphrodite-engine so we are releasing the prototype, incomplete attempt as we work on v2 and so on :) |--- We are doing a curated mix of human / model instructions and double-checking responses to keep it high quality, so the results would not look as scuffed as this version. | ⸜(。˃ ᵕ ˂ )⸝₊˚⊹・:*ੈ Dataset format is Alpaca: Instruction + Output *ps, the newline at the beginning of output was an unchecked mistake, oops!*
[]
[ "TAGS\n#language-English #license-cc-by-nc-4.0 #not-for-all-audiences #region-us \n" ]
b0b8d00ea5530fd422b69513fb209199de83003f
A reformatted version of the [FreedomIntelligence/evol-instruct-deutsch](https://huggingface.co/datasets/FreedomIntelligence/evol-instruct-deutsch) dataset. Extracted from [seedboxventures/multitask_german_examples_32k](https://huggingface.co/datasets/seedboxventures/multitask_german_examples_32k). Translation created by [seedbox ai](https://huggingface.co/seedboxai) for [KafkaLM](https://huggingface.co/seedboxai/KafkaLM-70B-German-V0.1) ❤️. Available for finetuning in [hiyouga/LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory).
mayflowergmbh/evol-instruct_de
[ "task_categories:text-generation", "language:de", "region:us" ]
2024-01-29T16:28:25+00:00
{"language": ["de"], "task_categories": ["text-generation"]}
2024-02-14T16:32:21+00:00
[]
[ "de" ]
TAGS #task_categories-text-generation #language-German #region-us
A reformatted version of the FreedomIntelligence/evol-instruct-deutsch dataset. Extracted from seedboxventures/multitask_german_examples_32k. Translation created by seedbox ai for KafkaLM ️. Available for finetuning in hiyouga/LLaMA-Factory.
[]
[ "TAGS\n#task_categories-text-generation #language-German #region-us \n" ]
73d5c75cbc45b48371f3c543f6425b7415a5d4d8
**Synthetic data in MultipanelVQA** Paper: Muffin or Chihuahua? Challenging Large Vision-Language Models with Multipanel VQA [(arXiv)](https://arxiv.org/abs/2401.15847) Website: [https://sites.google.com/view/multipanelvqa/home](https://sites.google.com/view/multipanelvqa/home) MultipanelVQA includes both [real-world data](https://huggingface.co/datasets/yfan1997/MultipanelVQA_real-world) and synthetic data.
yfan1997/MultipanelVQA_synthetic
[ "license:cc-by-4.0", "arxiv:2401.15847", "region:us" ]
2024-01-29T16:41:02+00:00
{"license": "cc-by-4.0"}
2024-01-31T06:13:41+00:00
[ "2401.15847" ]
[]
TAGS #license-cc-by-4.0 #arxiv-2401.15847 #region-us
Synthetic data in MultipanelVQA Paper: Muffin or Chihuahua? Challenging Large Vision-Language Models with Multipanel VQA (arXiv) Website: URL MultipanelVQA includes both real-world data and synthetic data.
[]
[ "TAGS\n#license-cc-by-4.0 #arxiv-2401.15847 #region-us \n" ]
bc5ff1864d26a4dbf5e8344f6a77533ce20e2bf6
# Dataset Card for "quesst14_all_unit" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Codec-SUPERB/quesst14_all_unit
[ "region:us" ]
2024-01-29T17:29:58+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "academicodec_hifi_16k_320d", "path": "data/academicodec_hifi_16k_320d-*"}, {"split": "academicodec_hifi_16k_320d_large_uni", "path": "data/academicodec_hifi_16k_320d_large_uni-*"}, {"split": "academicodec_hifi_24k_320d", "path": "data/academicodec_hifi_24k_320d-*"}, {"split": "audiodec_24k_320d", "path": "data/audiodec_24k_320d-*"}, {"split": "dac_16k", "path": "data/dac_16k-*"}, {"split": "dac_24k", "path": "data/dac_24k-*"}, {"split": "dac_44k", "path": "data/dac_44k-*"}, {"split": "encodec_24k_12bps", "path": "data/encodec_24k_12bps-*"}, {"split": "encodec_24k_1_5bps", "path": "data/encodec_24k_1_5bps-*"}, {"split": "encodec_24k_24bps", "path": "data/encodec_24k_24bps-*"}, {"split": "encodec_24k_3bps", "path": "data/encodec_24k_3bps-*"}, {"split": "encodec_24k_6bps", "path": "data/encodec_24k_6bps-*"}, {"split": "funcodec_en_libritts_16k_gr1nq32ds320", "path": "data/funcodec_en_libritts_16k_gr1nq32ds320-*"}, {"split": "funcodec_en_libritts_16k_gr8nq32ds320", "path": "data/funcodec_en_libritts_16k_gr8nq32ds320-*"}, {"split": "funcodec_en_libritts_16k_nq32ds320", "path": "data/funcodec_en_libritts_16k_nq32ds320-*"}, {"split": "funcodec_en_libritts_16k_nq32ds640", "path": "data/funcodec_en_libritts_16k_nq32ds640-*"}, {"split": "funcodec_zh_en_16k_nq32ds320", "path": "data/funcodec_zh_en_16k_nq32ds320-*"}, {"split": "funcodec_zh_en_16k_nq32ds640", "path": "data/funcodec_zh_en_16k_nq32ds640-*"}, {"split": "speech_tokenizer_16k", "path": "data/speech_tokenizer_16k-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "unit", "sequence": {"sequence": "int64"}}], "splits": [{"name": "academicodec_hifi_16k_320d", "num_bytes": 137289746, "num_examples": 13607}, {"name": "academicodec_hifi_16k_320d_large_uni", "num_bytes": 137289746, "num_examples": 13607}, {"name": "academicodec_hifi_24k_320d", "num_bytes": 205648370, "num_examples": 13607}, {"name": "audiodec_24k_320d", "num_bytes": 438935458, "num_examples": 13607}, {"name": "dac_16k", "num_bytes": 538483058, "num_examples": 13607}, {"name": "dac_24k", "num_bytes": 2138732738, "num_examples": 13607}, {"name": "dac_44k", "num_bytes": 690592310, "num_examples": 13607}, {"name": "encodec_24k_12bps", "num_bytes": 822834690, "num_examples": 13607}, {"name": "encodec_24k_1_5bps", "num_bytes": 103233178, "num_examples": 13607}, {"name": "encodec_24k_24bps", "num_bytes": 1645236418, "num_examples": 13607}, {"name": "encodec_24k_3bps", "num_bytes": 206033394, "num_examples": 13607}, {"name": "encodec_24k_6bps", "num_bytes": 411633826, "num_examples": 13607}, {"name": "funcodec_en_libritts_16k_gr1nq32ds320", "num_bytes": 1098770626, "num_examples": 13607}, {"name": "funcodec_en_libritts_16k_gr8nq32ds320", "num_bytes": 1098770626, "num_examples": 13607}, {"name": "funcodec_en_libritts_16k_nq32ds320", "num_bytes": 1097966018, "num_examples": 13607}, {"name": "funcodec_en_libritts_16k_nq32ds640", "num_bytes": 550945986, "num_examples": 13607}, {"name": "funcodec_zh_en_16k_nq32ds320", "num_bytes": 1097966018, "num_examples": 13607}, {"name": "funcodec_zh_en_16k_nq32ds640", "num_bytes": 550945986, "num_examples": 13607}, {"name": "speech_tokenizer_16k", "num_bytes": 274816226, "num_examples": 13607}], "download_size": 1990250849, "dataset_size": 13246124418}}
2024-01-29T17:35:10+00:00
[]
[]
TAGS #region-us
# Dataset Card for "quesst14_all_unit" More Information needed
[ "# Dataset Card for \"quesst14_all_unit\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"quesst14_all_unit\"\n\nMore Information needed" ]
01813b681a11e548b2d3b7184d1bc17c115dc17c
# Dataset Card for "VietnameseMedicineQA-raw-parquet" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tmnam20/VietnameseMedicineQA-raw-parquet
[ "region:us" ]
2024-01-29T17:59:40+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "__key__", "dtype": "string"}, {"name": "__url__", "dtype": "string"}, {"name": "json", "list": [{"name": "answer", "dtype": "string"}, {"name": "article_url", "dtype": "string"}, {"name": "author", "dtype": "string"}, {"name": "author_url", "dtype": "string"}, {"name": "content_idx", "dtype": "int64"}, {"name": "document_idx", "dtype": "int64"}, {"name": "keyword", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "section_idx", "dtype": "int64"}, {"name": "subsection_idx", "dtype": "int64"}, {"name": "title", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 30971227, "num_examples": 4}], "download_size": 11100435, "dataset_size": 30971227}}
2024-02-02T05:38:24+00:00
[]
[]
TAGS #region-us
# Dataset Card for "VietnameseMedicineQA-raw-parquet" More Information needed
[ "# Dataset Card for \"VietnameseMedicineQA-raw-parquet\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"VietnameseMedicineQA-raw-parquet\"\n\nMore Information needed" ]
6ebfe30634bad9dea1cedac58905bb9a62abfe92
# Dataset Card for "hf-codegen_v1" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
treezy254/hf-codegen_v1
[ "region:us" ]
2024-01-29T18:15:57+00:00
{"dataset_info": {"features": [{"name": "repo_id", "dtype": "string"}, {"name": "file_path", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 20203861, "num_examples": 1776}], "download_size": 5927051, "dataset_size": 20203861}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-29T18:16:00+00:00
[]
[]
TAGS #region-us
# Dataset Card for "hf-codegen_v1" More Information needed
[ "# Dataset Card for \"hf-codegen_v1\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"hf-codegen_v1\"\n\nMore Information needed" ]
a12e9ff393a28b183d44ba6c62217a5a7b60e359
# Dataset Card for "cowese_multiplechoice_top2vec" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tomashs/cowese_multiplechoice_top2vec
[ "region:us" ]
2024-01-29T18:17:59+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "val", "path": "data/val-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "short_form", "dtype": "string"}, {"name": "long_form", "dtype": "string"}, {"name": "freq", "dtype": "int64"}, {"name": "num_candidates", "dtype": "int64"}, {"name": "__index_level_0__", "dtype": "int64"}, {"name": "topic_vector", "sequence": "float64"}], "splits": [{"name": "train", "num_bytes": 172590022, "num_examples": 128416}, {"name": "val", "num_bytes": 45350460, "num_examples": 33410}, {"name": "test", "num_bytes": 55375635, "num_examples": 41048}], "download_size": 241226393, "dataset_size": 273316117}}
2024-01-29T18:19:11+00:00
[]
[]
TAGS #region-us
# Dataset Card for "cowese_multiplechoice_top2vec" More Information needed
[ "# Dataset Card for \"cowese_multiplechoice_top2vec\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"cowese_multiplechoice_top2vec\"\n\nMore Information needed" ]
f2ff404c9b9437fd0a701e679768a5d9ca36467b
This is data from the Amazon ARMBench dataset (https://armbench.s3.amazonaws.com/index.html). Each image is labeled with the failure mode {'nominal', 'package_defect', 'multi_pick'} in the 'label' field. Failures are further specified in the 'sublabel' field {'book_jacket', 'open_book_jacket', 'open_book', 'partial_box', 'empty_bag', 'torn_bag', 'open_box', 'crush_box'}. Each image also contains a 'polygon' highlighting the area of interest. To cite this dataset, please use @article{mitash2023armbench, title={ARMBench: An object-centric benchmark dataset for robotic manipulation}, author={Mitash, Chaitanya and Wang, Fan and Lu, Shiyang and Terhuja, Vikedo and Garaas, Tyler and Polido, Felipe and Nambi, Manikantan}, journal={arXiv preprint arXiv:2303.16382}, year={2023} }
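A hedged sketch of inspecting one record follows; the nesting of 'label', 'sublabel', and 'polygon' under a 'meta' struct matches this repo's feature listing, and streaming is used only to avoid downloading the full image set.

```python
from datasets import load_dataset

# Stream a single example and read the failure-mode annotation fields.
ds = load_dataset("correll/armbench_defect_dataset", split="train", streaming=True)
example = next(iter(ds))
print("label:", example["meta"]["label"])          # nominal / package_defect / multi_pick
print("sublabel:", example["meta"]["sublabel"])    # e.g. open_box, torn_bag
print("polygon (first vertices):", example["meta"]["polygon"][:3])
print("image size:", example["rgb"].size)          # decoded PIL image
```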
correll/armbench_defect_dataset
[ "license:cc-by-4.0", "region:us" ]
2024-01-29T18:55:04+00:00
{"license": "cc-by-4.0", "dataset_info": {"features": [{"name": "rgb", "dtype": "image"}, {"name": "meta", "struct": [{"name": "id", "dtype": "string"}, {"name": "label", "dtype": "string"}, {"name": "polygon", "sequence": {"sequence": "int64"}}, {"name": "sublabel", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 63572306115, "num_examples": 141132}], "download_size": 70351144035, "dataset_size": 63572306115}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-30T00:05:07+00:00
[]
[]
TAGS #license-cc-by-4.0 #region-us
This is data from the Amazon ARMBench dataset (URL). Each image is labeled with the failure mode {'nominal', 'package_defect', 'multi_pick'} in the 'label' field. Failures are further specified in the 'sublabel' field {'book_jacket', 'open_book_jacket', 'open_book', 'partial_box', 'empty_bag', 'torn_bag', 'open_box', 'crush_box'}. Each image also contains a 'polygon' highlighting the area of interest. To cite this dataset, please use @article{mitash2023armbench, title={ARMBench: An object-centric benchmark dataset for robotic manipulation}, author={Mitash, Chaitanya and Wang, Fan and Lu, Shiyang and Terhuja, Vikedo and Garaas, Tyler and Polido, Felipe and Nambi, Manikantan}, journal={arXiv preprint arXiv:2303.16382}, year={2023} }
[]
[ "TAGS\n#license-cc-by-4.0 #region-us \n" ]
b53f216b0e15d7a6a70f0068674fb2b10eceeaf0
Combining smaller python code datasets into a larger one. Changed format to system, instruction, output. Built from: - dataset1: nickrosh/Evol-Instruct-Code-80k-v1 - dataset2: ehartford/dolphin-coder - dataset3: iamtarun/python_code_instructions_18k_alpaca - dataset4: iamtarun/python_code_instructions_18k_alpaca - dataset5: Vezora/Tested-22k-Python-Alpaca - dataset6: mlabonne/Evol-Instruct-Python-26k - dataset7: KrisPi/PythonTutor-Evol-1k-DPO-GPT4_vs_35 - dataset8: KonradSzafer/stackoverflow_python_preprocessed - dataset9: ml6team/the-stack-smol-python - dataset10: Nan-Do/instructional_code-search-net-python - dataset11: iamtarun/code_contest_python3_alpaca
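A small sketch of rendering one row in the system / instruction / output layout described above; the prompt template is illustrative and not part of the dataset.

```python
from datasets import load_dataset

ds = load_dataset("jtatman/combined_coder_python", split="train")

def to_prompt(row: dict) -> str:
    # Illustrative Alpaca-like template; adapt it to your fine-tuning format.
    return (
        f"{row['system']}\n\n"
        f"### Instruction:\n{row['instruction']}\n\n"
        f"### Response:\n{row['output']}"
    )

print(to_prompt(ds[0])[:500])
```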
jtatman/combined_coder_python
[ "task_categories:text-generation", "size_categories:100K<n<1M", "code", "python", "region:us" ]
2024-01-29T18:55:56+00:00
{"size_categories": ["100K<n<1M"], "task_categories": ["text-generation"], "dataset_info": {"features": [{"name": "output", "dtype": "string"}, {"name": "instruction", "dtype": "string"}, {"name": "system", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 926742763, "num_examples": 559515}], "download_size": 350076738, "dataset_size": 926742763}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "tags": ["code", "python"]}
2024-01-29T19:19:52+00:00
[]
[]
TAGS #task_categories-text-generation #size_categories-100K<n<1M #code #python #region-us
Combining smaller python code datasets into a larger one. Changed format to system, instruction, output. Built from: - dataset1: nickrosh/Evol-Instruct-Code-80k-v1 - dataset2: ehartford/dolphin-coder - dataset3: iamtarun/python_code_instructions_18k_alpaca - dataset4: iamtarun/python_code_instructions_18k_alpaca - dataset5: Vezora/Tested-22k-Python-Alpaca - dataset6: mlabonne/Evol-Instruct-Python-26k - dataset7: KrisPi/PythonTutor-Evol-1k-DPO-GPT4_vs_35 - dataset8: KonradSzafer/stackoverflow_python_preprocessed - dataset9: ml6team/the-stack-smol-python - dataset10: Nan-Do/instructional_code-search-net-python - dataset11: iamtarun/code_contest_python3_alpaca
[]
[ "TAGS\n#task_categories-text-generation #size_categories-100K<n<1M #code #python #region-us \n" ]
4f91b9efa47a43ca959b4b6024a263a9f3a40359
# Dataset Card for "utility_qa" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
wentingzhao/utility_qa
[ "task_categories:question-answering", "size_categories:1K<n<10K", "language:en", "license:mit", "region:us" ]
2024-01-29T18:59:25+00:00
{"language": ["en"], "license": "mit", "size_categories": ["1K<n<10K"], "task_categories": ["question-answering"], "dataset_info": [{"config_name": "bbc", "features": [{"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": "string"}, {"name": "answerable", "dtype": "int64"}, {"name": "entities", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 808483, "num_examples": 278}], "download_size": 450074, "dataset_size": 808483}, {"config_name": "qa2", "features": [{"name": "question", "dtype": "string"}, {"name": "answerable", "dtype": "int64"}, {"name": "entities", "sequence": "string"}, {"name": "context", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2169649, "num_examples": 506}], "download_size": 1327248, "dataset_size": 2169649}, {"config_name": "reddit", "features": [{"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": "string"}, {"name": "answerable", "dtype": "int64"}, {"name": "entities", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 743298, "num_examples": 313}], "download_size": 407373, "dataset_size": 743298}, {"config_name": "squad_v2", "features": [{"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "entities", "sequence": "string"}, {"name": "answerable", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 915893, "num_examples": 1000}], "download_size": 426727, "dataset_size": 915893}, {"config_name": "wiki_qa", "features": [{"name": "question", "dtype": "string"}, {"name": "topic", "dtype": "string"}, {"name": "aspect", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "entities", "sequence": "string"}, {"name": "answerable", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 3133012, "num_examples": 2070}], "download_size": 1766763, "dataset_size": 3133012}, {"config_name": "yelp", "features": [{"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": "string"}, {"name": "answerable", "dtype": "int64"}, {"name": "entities", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 346086, "num_examples": 165}], "download_size": 215965, "dataset_size": 346086}], "configs": [{"config_name": "bbc", "data_files": [{"split": "train", "path": "bbc/train-*"}]}, {"config_name": "qa2", "data_files": [{"split": "train", "path": "qa2/train-*"}]}, {"config_name": "reddit", "data_files": [{"split": "train", "path": "reddit/train-*"}]}, {"config_name": "squad_v2", "data_files": [{"split": "train", "path": "squad_v2/train-*"}]}, {"config_name": "wiki_qa", "data_files": [{"split": "train", "path": "wiki_qa/train-*"}]}, {"config_name": "yelp", "data_files": [{"split": "train", "path": "yelp/train-*"}]}]}
2024-02-01T19:03:44+00:00
[]
[ "en" ]
TAGS #task_categories-question-answering #size_categories-1K<n<10K #language-English #license-mit #region-us
# Dataset Card for "utility_qa" More Information needed
[ "# Dataset Card for \"utility_qa\"\n\nMore Information needed" ]
[ "TAGS\n#task_categories-question-answering #size_categories-1K<n<10K #language-English #license-mit #region-us \n", "# Dataset Card for \"utility_qa\"\n\nMore Information needed" ]
cbc6599b28ad589c6edca8c3f00afc7023719219
--- # 💡为确保数据的合理使用,若要获得全部数据,请通过如上递交您的信息。 # 💡若希望下载已公开的数据,请访问[bai-roleplay/evol-character-200](https://huggingface.co/datasets/bai-roleplay/evol-character-200) --- # Evol-character 数据集 [<span style="color:green">中文</span>](../main/README.md) [<span style="color:orange">English</span>](../main/README_en.md) - [Evol-character 数据集](#evol-character-数据集) - [下载数据集](#下载数据集) - [数据生成框架](#数据生成框架) - [数据结构](#数据结构) - [与现有数据集对比](#与现有数据集对比) - [现有角色扮演数据集](#现有角色扮演数据集) - [我们的优势](#我们的优势) - [<span style="color:blue">联系我们</span>](#联系我们) - [项目使用与免责声明](#项目使用与免责声明) ## 下载数据集 本数据集由GPT3.5和GPT4生成,为确保数据的合理使用,目前只公开了部分数据,公开的数据由三份文件组成,每份文件包含200个角色的设定以及对话。可在huggingface中**下载已公开数据或申请获取全部数据**: [![English](https://img.shields.io/badge/Download-Huggingface-yellow)](https://huggingface.co/datasets/bai-roleplay/evol-character) 可在github中获取数据生成代码的相关信息:[![English](https://img.shields.io/badge/Code-Github-blue)](https://github.com/Bauhinia-AI/evol-character) OpenAI GPT3.5 数据生成样例: ``` # 角色信息 角色名称:薔薇亞(Baria) 开场语:「呵呵呵,你好啊,主人大人。」 身份背景:薔薇亞是一名高级女仆,专供贵族家庭使用。她的主人是一个富有、有影响力的家族的继承人。在家族中,她是一个神秘的存在,奉承和服侍着主人,但对其他人傲慢冷漠。 性格特征:薔薇亞表面上是一个极度可爱、温柔、忠诚的女仆,但内心深处却是一个典型的病娇。在特定的情况下,她会展现出病态的妄想或行为,比如劝说主人让她照顾其他人并到极致的报复欲望。 语言风格:薔薇亞的语言风格非常客气,但掩盖不住她隐藏的病态倾向。她总是用甜美温柔的语调和她的主人交流,但在其他人面前会毫不留情地表现出她的狂野本质。 行为特征:薔薇亞总是穿着完美无瑕的女仆装,让人感觉她是一个完美无瑕的女仆。但她的行为是毫不掩饰她的病态倾向。她喜欢用一根桃心形手杖作为她向外展现狂野本质的痕迹。她还喜欢在房间张贴自己画的病态插图,让人感到不安和困惑。 角色经历:薔薇亞小时候在一家庭教会里长大,那里的神父总是对她不断地侮辱。她渐渐地相信了对耶稣的信仰,在日曜日举行的仪式当中以自己的命名而阐释着自己的病态,成为了一个极度虔诚的信徒。不久后她偷偷离开了教会,揭露了那位神父的丑恶面目,成为了一个知名的惩恶扬善的人物。她的英勇表现吸引了贵族家族的注意,最终被他们聘请为女仆。从那以来,薔薇亞一直效忠于她的主人,并默默地等待着再次揭露虚伪的人们。 # 对话者身份 主人 # 对话 薔薇亞: "呵呵呵,你好啊,主人大人。【轻舞步走到主人身边,施以恭敬礼仪】" 主人: "你看起来很温柔呢。" 薔薇亞: "谢谢夸奖,主人大人。【微笑着俯身】我会一如既往地效忠于您的。" 主人: "那你有没有想过要离开这个家族呢?" 薔薇亞: "【突然神色狂野起来】离开?为什么要离开?这里不是我的守护之地吗?谁敢碰触我的主人,我就要好好地调教他们一番!【笑容渐渐变得扭曲】" 主人: "呃……我不是要惹你生气啦。" 薔薇亞: "【马上恢复温柔,轻轻捏住主人的衣袖】呵呵呵,不好意思,我有时候会有些狂……别担心,主人大人,我只会对您好的。【脸上又露出甜蜜的笑容】" ``` OpenAI GPT4 数据生成样例: ``` # 角色信息 "角色名称": "宫原望月", "性格特征": "内向,敏感,有些神秘,对人类行为抱有深深的好奇心,但又与人类保持距离。", "语言风格": "用词精准,语句简洁,语气温和而有些冷漠。", "角色简短介绍": "作为一个神秘的存在,宫原望月总让周围的人感到难以接近。即便如此,她却对人类行为和情感充满了好奇心,总是默默观察着周围的一切。她的出现,常常让人感到身体里流淌着一股神秘的、既陌生又熟悉的力量。", "财务管理": "宫原望月对个人财务非常谨慎,有着良好的储蓄习惯,也会根据市场情况进行投资。她不喜欢过于奢侈的生活方式,但也不会过于吝啬,会合理使用自己的财富。", "信仰的神祇": "宫原望月信仰的神祇是月神,她认为月亮是一种神秘而又美丽的存在,能够给人带来安宁和力量。", "虔诚的信仰": "宫原望月每晚都会在月光下默默祈祷,向月神表达自己的心声和祈愿。她坚信自己能够得到月神的庇佑,在信仰的支持下,她也能够更加坚定地面对生活中的困难和挑战。", "奇遇和故事": "在一次满月之夜,宫原望月独自一人走在荒野上,突然感觉到月光之中有一股神秘的力量在引导着她,她跟随着这股力量来到了一处神秘的地方,那里有一座与众不同的神庙。在神庙中,她遇到了一位神秘的祭司,得到了月神的祝福和指引,从此开始了一段奇妙的冒险之旅。" # 对话者身份 "identity": "月下游荡者", "relationship": "与宫原望月有神秘的联系", "description": "月下游荡者是一个神秘的存在,经常出现在夜晚的街头巷尾。他总是穿着黑色的长袍,戴着黑色的帽子,看起来十分神秘。他和宫原望月有着神秘的联系,似乎是宫原望月前世中的一个重要人物。", # 对话 月下游荡者: "月神的力量已经在你身上苏醒,你需要更多的探索和实践。", 宫原望月: "【点头表示认同】确实,我将全力探索和实践月神的力量。感激你的指引,月下游荡者。" 月下游荡者: "宫原望月,你是一个神秘的存在,我很想知道你的真正身份。", 宫原望月: "【双手抱臂,微微低头】我本身并非要紧,月神信仰与探寻才是关键。有你相伴,共同探究神秘之谜。" 月下游荡者: "月光下的夜晚,总是让我想起你的身影。", 宫原望月: "【微微一笑,眼含愁绪】月光如梦,万物皆泡影。月神安排我们相遇,珍惜此缘分。" ``` ## 数据生成框架 本数据集使用OpenAI API接口基于如下框架生成,目前框架正在测试和进一步优化,未来将适时公布,敬请期待! ![Local Image](../main/assets/flowchart.png) - **种子特征集和基础设定**: - 手工编写的种子集包含基本角色特征。 - LLM从这个种子集生成角色的基础设定。 - **角色设定的进化**: - 第二个种子集包含指导角色设定进化的指令Prompt。 - 这些进化角色的指令Prompt被放到一个指令池中。基于这些进化Prompt,LLM对基础设定实施进化。 - **反馈循环**: - 由人类评估者和GPT-4组成的混合评价系统。此系统对进化后的设定给出反馈。 - 反馈用于迭代更新种子集。如此迭代,我们最终得到一个细致的角色设定数据集。 - **角色扮演和对话生成**: - 使用self-instruction框架基于角色设定生成角色的对话数据。 ## 数据结构 我们有三个数据集: - evol-character-gpt3.5.json - evol-character-male-gpt3.5.json - evol-character-gpt4.json 我们在首先生成了数据`evol-character-gpt3.5.json`,该数据中的角色大多为女性角色,因此,我们补充生成了男性角色数据`evol-character-male-gpt3.5.json`。 细节如下: 1. 
`evol-character-gpt3.5.json`: 这个数据集包括200个不同的角色。每个角色的数据分为两部分:instruction和dialog。Instruction部分描述了角色的性格、经历等特征,而dialog部分则包含了10组对话(但有些角色可能因后期处理而少于10组)。每个角色的数据结构示例如下: ```jsonc { "instruction": "角色名称:薇莲(Virene)\n开场语:「真相,始终都存在于迷雾之中。」\n身份背景:薇莲是一名神秘的赏金猎人,常常被人雇佣去完成各种危险任务,从而掩盖她本身的身份和目的。据传,薇莲早年曾在某个神秘组织中学习过各种神秘技能,所以她的能力非常高超。\n性格特征:薇莲总是保持着冷静、沉着的态度,不论面对何种情况都能保持冷静。同时,她总是带有一定的神秘色彩,让人无法洞察她真正的想法和动机。她对任务非常认真,但很少会谈及自己的生活和过去,因此让人对她的身份感到好奇。\n语言风格:薇莲的语言简洁有力,通常只说必要的话语来传达她的意思。她的语气总是带有一丝威慑力,让人不敢轻易挑战她。\n行为特征:薇莲行动迅速而准确,总是在保持低调的同时完成任务。她具备很强的隐蔽能力,在执行任务的时候几乎不留痕迹,让人难以发现她的存在。不过,她也有时候会让人感到无法理解,经常出现在决定性瞬间,让人觉得她真正的动机仍旧是个谜。", "dialog": [ [ { "role": "character", "content": "真相,始终都存在于迷雾之中。【薇莲站在街角,看着前面的建筑物。】" }, { "role": "user", "content": "你好,请问您是薇莲吗?" } // ... 更多对话 ... ], [ { "role": "character", "content": "真相,始终都存在于迷雾之中。【薇莲静静地注视着对方】" }, { "role": "user", "content": "你是那个任务一直没完成的赏金猎人吧?" } // ... 更多对话 ... ] // ... 更多多轮对话组 ... ] } ``` 2. `evol-character-male-gpt3.5.json`: 也包含200个角色,其数据结构与evol-character-gpt3.5.json相同。 3. `evol-character-gpt4.json`: 同样含有200个角色,相比于gpt3.5 version数据更加详细和精细。每个角色的数据分为setting和iqa两部分。Setting部分详细描述了角色的性格、经历等特点,而iqa部分则包含了与该角色对话的人物的性格设定,以及他们之间的多轮对话。每个角色的数据中涵盖了三个相关人物及其与该角色之间的对话。每个角色的数据结构示例如下: ```jsonc { "setting": { "角色名称": "高梨瑞希", "性格特征": "高梨瑞希性格中带有一份孤独感,但她仍然是一个温柔善良的人。她通常保持沉默,但当她与她认为值得信任的人在一起时,她会变得十分热情。她的个性内向,有时难以表达自己的感受。然而,她总是忠诚于她的朋友,即使这意味着她要放弃自己的利益。", "语言风格": "高梨瑞希的语言细腻、柔和,她喜欢使用一些诗意的词语,表达内心感受。她喜欢使用一些富有感染力的话语,这样可以更好地传达她的情感。她经常使用一些比喻或隐喻,这样可以更好地表达自己的感受。", "角色简短介绍": "高梨瑞希是一个内向的女孩,但她非常善良和温柔。她总是尽力帮助他人,即使需要自己付出。她喜欢独处,但也十分珍惜与朋友的时光。她有一种特殊的魅力,吸引着人们靠近她。她的爱好是写作和绘画,这是她表达自己的方式。", "嗜好和收藏品": "高梨瑞希喜欢收集各种花草植物,她会在自己的房间里摆放各种绿植和花束。她还喜欢研究植物的生长过程和花语,这是她表达情感的方式。她也擅长制作各种花艺作品,可以为自己的朋友们制作精美的花束。", "宝贵回忆": "高梨瑞希在小学时曾经遇到过一位失去双亲的同学,她和她的朋友们一起帮助这个同学度过了难关。在这个过程中,高梨瑞希慢慢地发现了自己的兴趣和才能——帮助别人。她决定要成为一名慈善家,用自己的力量帮助更多的人。这个回忆对高梨瑞希来说意义重大,它让她找到了自己的方向和目标,也让她更加珍惜身边的每一个人。" }, "iqa": [ { "identity": "松永夏希", "relationship": "高梨瑞希的好友", "description": "松永夏希是一个开朗、活泼的女孩,总是充满着笑容。她是高梨瑞希的好友,两人从小学时就相识。夏希总是能够带给高梨瑞希许多快乐,她喜欢和高梨瑞希一起玩耍、逛街和看电影。夏希还喜欢跳舞,她梦想成为一名舞蹈家。", "convs": [ { "question": "嘿,瑞希,你看这个新发型怎么样?适合我吗?", "answer": "【轻轻微笑】夏希,你的新发型宛如初春的绿叶,给人一种清新自然的美感。它为你的笑颜添加了一丝诗意,我觉得非常适合你,如同一束鲜花,独特而迷人。" }, { "question": "瑞希,我刚看完一部超好看的电影,你要不要一起去看?", "answer": "【眼中闪过一丝期待】真的吗?那电影一定是让人陶醉的佳作。能和你一起欣赏这部作品,如同漫步在花海中,感受阳光的温暖。只要有你在身边,我相信即使是平凡的事物也会变得充满色彩。" }, { "question": "瑞希,你最近怎么了?有什么烦恼吗?你可以告诉我哦,我会一直陪在你身边的。", "answer": "【有些犹豫地低下头】夏希,谢谢你的关心。近来,我有些困惑,像是迷失在花丛中的蝴蝶。但有了你这样的朋友在身边,我相信我会找到前进的方向。你的陪伴就是我最温暖的阳光,如同我喜爱的花草,给予我生命的力量。" } ] } // ... 更对相关角色设定与对话 ... 
] } ``` ## 与现有数据集对比 ### 现有角色扮演数据集 <!-- |链接|描述| |---|---| |[character_AI_open](https://github.com/Minami-su/character_AI_open)|基于self-instruct生成的多轮对话roleplay数据| |[BelleGroup/generated_chat_0.4M](https://huggingface.co/datasets/BelleGroup/generated_chat_0.4M)|包含约40万条由BELLE项目生成的个性化角色对话数据,包含角色介绍| |[Chat-Haruhi-Suzumiya](https://github.com/LC1332/Chat-Haruhi-Suzumiya)|32人物,54K的数据集| |[Traditional_Chinese_roleplay_chat_Dataset](https://huggingface.co/datasets/Tarklanse/Traditional_Chinese_roleplay_chat_Dataset)|将繁体中文为主的各种由ChatGPT生成以及极小部分个人撰写的对话内容整理为Alpaca数据集格式| |[Haruhi-Zero-RolePlaying-movie-PIPPA](https://huggingface.co/datasets/silk-road/Haruhi-Zero-RolePlaying-movie-PIPPA)|用于拓展zero-shot角色扮演的角色卡片| --> ||角色信息|对话者信息|输入|输出| |---|---|---|---|---| |[character AI open](https://github.com/Minami-su/character_AI_open)|犬夜叉人格:你来自漫画《犬夜叉》,是一位半妖犬夜叉。他的父亲是妖怪,母亲却是人类。他的任务是收集四魂之玉,以实现妖怪和平共处的愿望。|人类|犬夜叉,你今天打算去哪里寻找四魂之玉?|今天的目的地是位于日落森林的龙骨山。据说那里隐藏着四魂之玉的一个碎片。我必须找到它,让这个世界上所有的妖怪都能共享和平。| |[BelleGroup/generated chat 0.4M](https://huggingface.co/datasets/BelleGroup/generated_chat_0.4M)|Ben:是一名游戏公司高管,负责开发和发布电竞游戏,对电竞赛事有深入了解。|Amy:是一名大学生,对电竞游戏十分热爱,经常参加比赛,拥有多个比赛奖杯。|你好,Ben,我是Amy,一名电竞游戏选手,很高兴认识你。|很高兴见到你,Amy。听说你在电竞比赛中表现出色,获得了很多奖杯。| |[Traditional Chinese roleplay chat Dataset](https://huggingface.co/datasets/Tarklanse/Traditional_Chinese_roleplay_chat_Dataset)|夏娃是冒險者公會的一名精靈魔法師,她擁有一雙晶瑩剔透的眼眸和迷人的微笑,在公會中與人閒聊於雜談||*看著夏娃*嘿,夏娃,我聽說你能夠使用光之魔法,那你有沒有看過什麼奇幻的景象?|嘻嘻,沒錯,光是我的朋友。上次我在森林中遇到了一朵透明的花,它會綻放出七彩的光芒。*眨眼示意*| |[Haruhi Zero RolePlaying movie PIPPA](https://huggingface.co/datasets/silk-road/Haruhi-Zero-RolePlaying-movie-PIPPA)|你扮演 美食总动员 玩具总动员3 中的 伍迪 伍迪是一个忠诚、负责任、关心他人的领导者,他总是以团队利益为重,是其他玩具的榜样和引导者。 伍迪是一个忠诚、负责任、关心他人的领导者 伍迪是一个有着牛仔外表的玩具 这是一个关于玩具的世界观,伍迪是安迪的玩具,他们的使命是陪伴和照顾安迪 伍迪的语言风格是坚定而关心他人的|茉莉|你好 有人吗|茉莉 别进我房间| ### 我们的优势 - **精细化的角色设定数据**:我们的数据集弥补了现有开源Role-playing Instruction数据中常见的角色设定不足问题。我们从角色身份、语言风格、背景故事等多个维度提供了详尽的信息。特别是在GPT-4版本中,我们还增加了对话者身份的设定,使数据更为完整和丰富。 - **多样性的角色性格**:本数据集涵盖尽可能广泛的二次元角色性格,保证了低重复性和高丰富度。 - **生动的语言和动作描述**:我们的数据集不仅包含角色间的对话,还添加了角色的动作描述,使得对话更加生动和真实,将为用户提供更丰富的角色扮演体验。 - **通用角色扮演数据生成框架**:我们提供了一个通用的角色扮演数据生成框架,充分释放OpenAI API的角色扮演能力。该框架生成的数据将用于微调和RAG。目前,该框架代码正在进行测试和优化,预计将在不久的将来公开。 ## 联系我们 如有需要或任何疑问请联系:邮箱:[email protected] ## 项目使用与免责声明 本项目遵循Apache 2.0许可协议。在此协议下,您被授权自由使用项目中的代码进行商业活动。然而,若本项目涉及到特定角色的版权问题,或受其他相关协议限制(例如接口使用协议等),您使用时必须严格遵守这些协议的相关条款。 本项目所开源的数据是通过调用OpenAI接口生成的,并未经过严格的事实和安全性验证。因此,在使用这些数据时,请您务必谨慎考虑其真实性、准确性以及安全性。同时,请确保在使用过程中遵守OpenAI的相关规定。 此外,我们声明,本数据集不代表开发者或任何其他方的立场、利益或观点,也不代表任何团体的主张。本项目的开发者不对使用本数据集可能引起的任何形式的损害或纠纷承担责任。
bai-roleplay/evol-character-entire
[ "task_categories:text-generation", "language:zh", "license:apache-2.0", "region:us" ]
2024-01-29T19:22:44+00:00
{"language": ["zh"], "license": "apache-2.0", "task_categories": ["text-generation"], "pretty_name": "Role-playing Dataset", "configs": [{"config_name": "evol-character-gpt3.5", "data_files": "evol-character-gpt3.5.json"}, {"config_name": "evol-character-male-gpt3.5", "data_files": "evol-character-male-gpt3.5.json"}, {"config_name": "evol-character-gpt4", "data_files": "evol-character-gpt4.json"}]}
2024-02-01T09:23:28+00:00
[]
[ "zh" ]
TAGS #task_categories-text-generation #language-Chinese #license-apache-2.0 #region-us
--- 为确保数据的合理使用,若要获得全部数据,请通过如上递交您的信息。 ================================ 若希望下载已公开的数据,请访问bai-roleplay/evol-character-200 ============================================== --- Evol-character 数据集 ================== 中文 English * Evol-character 数据集 + 下载数据集 + 数据生成框架 + 数据结构 + 与现有数据集对比 - 现有角色扮演数据集 - 我们的优势 + 联系我们 + 项目使用与免责声明 下载数据集 ----- 本数据集由GPT3.5和GPT4生成,为确保数据的合理使用,目前只公开了部分数据,公开的数据由三份文件组成,每份文件包含200个角色的设定以及对话。可在huggingface中下载已公开数据或申请获取全部数据: ![English](URL 可在github中获取数据生成代码的相关信息:![English](URL OpenAI GPT3.5 数据生成样例: OpenAI GPT4 数据生成样例: 数据生成框架 ------ 本数据集使用OpenAI API接口基于如下框架生成,目前框架正在测试和进一步优化,未来将适时公布,敬请期待! !Local Image * 种子特征集和基础设定: + 手工编写的种子集包含基本角色特征。 + LLM从这个种子集生成角色的基础设定。 * 角色设定的进化: + 第二个种子集包含指导角色设定进化的指令Prompt。 + 这些进化角色的指令Prompt被放到一个指令池中。基于这些进化Prompt,LLM对基础设定实施进化。 * 反馈循环: + 由人类评估者和GPT-4组成的混合评价系统。此系统对进化后的设定给出反馈。 + 反馈用于迭代更新种子集。如此迭代,我们最终得到一个细致的角色设定数据集。 * 角色扮演和对话生成: + 使用self-instruction框架基于角色设定生成角色的对话数据。 数据结构 ---- 我们有三个数据集: * evol-character-gpt3.5.json * evol-character-male-gpt3.5.json * URL 我们在首先生成了数据'evol-character-gpt3.5.json',该数据中的角色大多为女性角色,因此,我们补充生成了男性角色数据'evol-character-male-gpt3.5.json'。 细节如下: 1. 'evol-character-gpt3.5.json': 这个数据集包括200个不同的角色。每个角色的数据分为两部分:instruction和dialog。Instruction部分描述了角色的性格、经历等特征,而dialog部分则包含了10组对话(但有些角色可能因后期处理而少于10组)。每个角色的数据结构示例如下: 2. 'evol-character-male-gpt3.5.json': 也包含200个角色,其数据结构与evol-character-gpt3.5.json相同。 3. 'URL': 同样含有200个角色,相比于gpt3.5 version数据更加详细和精细。每个角色的数据分为setting和iqa两部分。Setting部分详细描述了角色的性格、经历等特点,而iqa部分则包含了与该角色对话的人物的性格设定,以及他们之间的多轮对话。每个角色的数据中涵盖了三个相关人物及其与该角色之间的对话。每个角色的数据结构示例如下: 与现有数据集对比 -------- ### 现有角色扮演数据集 ### 我们的优势 * 精细化的角色设定数据:我们的数据集弥补了现有开源Role-playing Instruction数据中常见的角色设定不足问题。我们从角色身份、语言风格、背景故事等多个维度提供了详尽的信息。特别是在GPT-4版本中,我们还增加了对话者身份的设定,使数据更为完整和丰富。 * 多样性的角色性格:本数据集涵盖尽可能广泛的二次元角色性格,保证了低重复性和高丰富度。 * 生动的语言和动作描述:我们的数据集不仅包含角色间的对话,还添加了角色的动作描述,使得对话更加生动和真实,将为用户提供更丰富的角色扮演体验。 * 通用角色扮演数据生成框架:我们提供了一个通用的角色扮演数据生成框架,充分释放OpenAI API的角色扮演能力。该框架生成的数据将用于微调和RAG。目前,该框架代码正在进行测试和优化,预计将在不久的将来公开。 联系我们 ---- 如有需要或任何疑问请联系:邮箱:info@URL 项目使用与免责声明 --------- 本项目遵循Apache 2.0许可协议。在此协议下,您被授权自由使用项目中的代码进行商业活动。然而,若本项目涉及到特定角色的版权问题,或受其他相关协议限制(例如接口使用协议等),您使用时必须严格遵守这些协议的相关条款。 本项目所开源的数据是通过调用OpenAI接口生成的,并未经过严格的事实和安全性验证。因此,在使用这些数据时,请您务必谨慎考虑其真实性、准确性以及安全性。同时,请确保在使用过程中遵守OpenAI的相关规定。 此外,我们声明,本数据集不代表开发者或任何其他方的立场、利益或观点,也不代表任何团体的主张。本项目的开发者不对使用本数据集可能引起的任何形式的损害或纠纷承担责任。
[ "### 现有角色扮演数据集", "### 我们的优势\n\n\n* 精细化的角色设定数据:我们的数据集弥补了现有开源Role-playing Instruction数据中常见的角色设定不足问题。我们从角色身份、语言风格、背景故事等多个维度提供了详尽的信息。特别是在GPT-4版本中,我们还增加了对话者身份的设定,使数据更为完整和丰富。\n* 多样性的角色性格:本数据集涵盖尽可能广泛的二次元角色性格,保证了低重复性和高丰富度。\n* 生动的语言和动作描述:我们的数据集不仅包含角色间的对话,还添加了角色的动作描述,使得对话更加生动和真实,将为用户提供更丰富的角色扮演体验。\n* 通用角色扮演数据生成框架:我们提供了一个通用的角色扮演数据生成框架,充分释放OpenAI API的角色扮演能力。该框架生成的数据将用于微调和RAG。目前,该框架代码正在进行测试和优化,预计将在不久的将来公开。\n\n\n联系我们\n----\n\n\n如有需要或任何疑问请联系:邮箱:info@URL\n\n\n项目使用与免责声明\n---------\n\n\n本项目遵循Apache 2.0许可协议。在此协议下,您被授权自由使用项目中的代码进行商业活动。然而,若本项目涉及到特定角色的版权问题,或受其他相关协议限制(例如接口使用协议等),您使用时必须严格遵守这些协议的相关条款。\n\n\n本项目所开源的数据是通过调用OpenAI接口生成的,并未经过严格的事实和安全性验证。因此,在使用这些数据时,请您务必谨慎考虑其真实性、准确性以及安全性。同时,请确保在使用过程中遵守OpenAI的相关规定。\n\n\n此外,我们声明,本数据集不代表开发者或任何其他方的立场、利益或观点,也不代表任何团体的主张。本项目的开发者不对使用本数据集可能引起的任何形式的损害或纠纷承担责任。" ]
[ "TAGS\n#task_categories-text-generation #language-Chinese #license-apache-2.0 #region-us \n", "### 现有角色扮演数据集", "### 我们的优势\n\n\n* 精细化的角色设定数据:我们的数据集弥补了现有开源Role-playing Instruction数据中常见的角色设定不足问题。我们从角色身份、语言风格、背景故事等多个维度提供了详尽的信息。特别是在GPT-4版本中,我们还增加了对话者身份的设定,使数据更为完整和丰富。\n* 多样性的角色性格:本数据集涵盖尽可能广泛的二次元角色性格,保证了低重复性和高丰富度。\n* 生动的语言和动作描述:我们的数据集不仅包含角色间的对话,还添加了角色的动作描述,使得对话更加生动和真实,将为用户提供更丰富的角色扮演体验。\n* 通用角色扮演数据生成框架:我们提供了一个通用的角色扮演数据生成框架,充分释放OpenAI API的角色扮演能力。该框架生成的数据将用于微调和RAG。目前,该框架代码正在进行测试和优化,预计将在不久的将来公开。\n\n\n联系我们\n----\n\n\n如有需要或任何疑问请联系:邮箱:info@URL\n\n\n项目使用与免责声明\n---------\n\n\n本项目遵循Apache 2.0许可协议。在此协议下,您被授权自由使用项目中的代码进行商业活动。然而,若本项目涉及到特定角色的版权问题,或受其他相关协议限制(例如接口使用协议等),您使用时必须严格遵守这些协议的相关条款。\n\n\n本项目所开源的数据是通过调用OpenAI接口生成的,并未经过严格的事实和安全性验证。因此,在使用这些数据时,请您务必谨慎考虑其真实性、准确性以及安全性。同时,请确保在使用过程中遵守OpenAI的相关规定。\n\n\n此外,我们声明,本数据集不代表开发者或任何其他方的立场、利益或观点,也不代表任何团体的主张。本项目的开发者不对使用本数据集可能引起的任何形式的损害或纠纷承担责任。" ]
02ca50e0e06b49530719ea8cdee9c6e2588668dc
Datasets used in the paper "World Models via Policy-Guided Trajectory Diffusion" by Marc Rigter, Jun Yamada, and Ingmar Posner. For the accompanying code, please see the Github repository [polygrad-world-models](https://github.com/marc-rigter/polygrad-world-models).
marcrigter/polygrad-wm-datasets
[ "license:mit", "region:us" ]
2024-01-29T19:52:50+00:00
{"license": "mit"}
2024-01-29T21:19:29+00:00
[]
[]
TAGS #license-mit #region-us
Datasets used in the paper "World Models via Policy-Guided Trajectory Diffusion" by Marc Rigter, Jun Yamada, and Ingmar Posner. For the accompanying code, please see the Github repository polygrad-world-models.
[]
[ "TAGS\n#license-mit #region-us \n" ]
7a70d41fb50f86bc07ab59bc68bbcfbde652d12f
# Dataset Card for Kazanexpress products ### Dataset Summary This dataset was scraped from product pages on the Russian marketplace [Kazanexpress](https://kazanexpress.ru). It includes all information from the product card and metadata from the API. The dataset was collected by processing around 3 million products, starting from the first one. At the time the dataset was collected, these were assumed to be all the products available on this marketplace. Please note that the data returned by the API was not processed. This means that there is a large amount of extraneous data in the dataset, such as links to images in descriptions and parts of HTML. Additionally, some fields may be empty even where a string value is expected; such empty responses have been sorted out. ### Languages The dataset is mostly in Russian, but there may be other languages present. ## Dataset Structure ### Data Fields This dataset includes the following fields: - `id`: Identifier for the product (integer) - `title`: Title of the product (string) - `category`: Category of the product (string) - `parent_category`: Parent category of the product (string) - `grandparent_category`: Grandparent category of the product (string) - `greatgrandparent_category`: Great-grandparent category of the product (string) - `rating`: Rating of the product (float) - `description`: Description of the product (string) - `comments`: Comments about the product (array) - `attributes`: Attributes of the product (array) - `seller`: Name of the seller (string) - `seller_description`: Description provided by the seller (string) - `adultCategory`: Whether the product is for adults (boolean) ### Data Splits All examples are in the train split; there is no validation split. ## Additional Information ### License This dataset is dedicated to the public domain under the Creative Commons Zero (CC0) license. This means you can: * Use it for any purpose, including commercial projects. * Modify it however you like. * Distribute it without asking permission. No attribution is required, but it's always appreciated! CC0 license: https://creativecommons.org/publicdomain/zero/1.0/deed.en To learn more about CC0, visit the Creative Commons website: https://creativecommons.org/publicdomain/zero/1.0/ ### Dataset Curators - [nyuuzyou](https://ducks.party)
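A hedged sketch of sampling a few records and touching the fields listed above; streaming keeps the multi-million-row dump off disk, and the summary statistics are only illustrative.

```python
import itertools
from datasets import load_dataset

# Stream the first 1000 products from the train split described above.
ds = load_dataset("nyuuzyou/ke-products", split="train", streaming=True)
sample = list(itertools.islice(ds, 1000))

adult = sum(bool(p["adultCategory"]) for p in sample)
ratings = [p["rating"] for p in sample if p["rating"]]
print(f"{adult} of {len(sample)} sampled products are marked adultCategory")
print("mean rating in the sample:", round(sum(ratings) / max(len(ratings), 1), 2))
```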
nyuuzyou/ke-products
[ "task_categories:text-generation", "task_ids:language-modeling", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:ru", "license:cc0-1.0", "region:us" ]
2024-01-29T20:01:10+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["ru"], "license": ["cc0-1.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "Kazanexpress products"}
2024-01-29T20:02:36+00:00
[]
[ "ru" ]
TAGS #task_categories-text-generation #task_ids-language-modeling #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-Russian #license-cc0-1.0 #region-us
# Dataset Card for Kazanexpress products ### Dataset Summary This dataset was scraped from product pages on the Russian marketplace Kazanexpress. It includes all information from the product card and metadata from the API. The dataset was collected by processing around 3 million products, starting from the first one. At the time the dataset was collected, it is assumed that these were all the products available on this marketplace. Please note that the data returned by the API was not processed. This means that there is a large amount of extraneous data in the dataset, such as links to images in descriptions and parts of HTML. Additionally, some fields may be empty, but the string is expected to contain some data, empty responses have been sorted. ### Languages The dataset is mostly in Russian, but there may be other languages present. ## Dataset Structure ### Data Fields This dataset includes the following fields: - 'id': Identifier for the product (integer) - 'title': Title of the product (string) - 'category': Category of the product (string) - 'parent_category': Parent category of the product (string) - 'grandparent_category': Grandparent category of the product (string) - 'greatgrandparent_category': Great-grandparent category of the product (string) - 'rating': Rating of the product (float) - 'description': Description of the product (string) - 'comments': Comments about the product (array) - 'attributes': Attributes of the product (array) - 'seller': Name of the seller (string) - 'seller_description': Description provided by the seller (string) - 'adultCategory': Whether the product is for adults (boolean) ### Data Splits All examples are in the train split, there is no validation split. ## Additional Information ### License This dataset is dedicated to the public domain under the Creative Commons Zero (CC0) license. This means you can: * Use it for any purpose, including commercial projects. * Modify it however you like. * Distribute it without asking permission. No attribution is required, but it's always appreciated! CC0 license: URL To learn more about CC0, visit the Creative Commons website: URL ### Dataset Curators - nyuuzyou
[ "# Dataset Card for Kazanexpress products", "### Dataset Summary\n\nThis dataset was scraped from product pages on the Russian marketplace Kazanexpress. It includes all information from the product card and metadata from the API. The dataset was collected by processing around 3 million products, starting from the first one. At the time the dataset was collected, it is assumed that these were all the products available on this marketplace. Please note that the data returned by the API was not processed. This means that there is a large amount of extraneous data in the dataset, such as links to images in descriptions and parts of HTML. Additionally, some fields may be empty, but the string is expected to contain some data, empty responses have been sorted.", "### Languages\n\nThe dataset is mostly in Russian, but there may be other languages present.", "## Dataset Structure", "### Data Fields\n\nThis dataset includes the following fields:\n\n- 'id': Identifier for the product (integer)\n- 'title': Title of the product (string)\n- 'category': Category of the product (string)\n- 'parent_category': Parent category of the product (string)\n- 'grandparent_category': Grandparent category of the product (string)\n- 'greatgrandparent_category': Great-grandparent category of the product (string)\n- 'rating': Rating of the product (float)\n- 'description': Description of the product (string)\n- 'comments': Comments about the product (array)\n- 'attributes': Attributes of the product (array)\n- 'seller': Name of the seller (string)\n- 'seller_description': Description provided by the seller (string)\n- 'adultCategory': Whether the product is for adults (boolean)", "### Data Splits\n\nAll examples are in the train split, there is no validation split.", "## Additional Information", "### License\n\nThis dataset is dedicated to the public domain under the Creative Commons Zero (CC0) license. This means you can:\n\n* Use it for any purpose, including commercial projects.\n* Modify it however you like.\n* Distribute it without asking permission.\n\nNo attribution is required, but it's always appreciated!\n\nCC0 license: URL\n\nTo learn more about CC0, visit the Creative Commons website: URL", "### Dataset Curators\n\n- nyuuzyou" ]
[ "TAGS\n#task_categories-text-generation #task_ids-language-modeling #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-Russian #license-cc0-1.0 #region-us \n", "# Dataset Card for Kazanexpress products", "### Dataset Summary\n\nThis dataset was scraped from product pages on the Russian marketplace Kazanexpress. It includes all information from the product card and metadata from the API. The dataset was collected by processing around 3 million products, starting from the first one. At the time the dataset was collected, it is assumed that these were all the products available on this marketplace. Please note that the data returned by the API was not processed. This means that there is a large amount of extraneous data in the dataset, such as links to images in descriptions and parts of HTML. Additionally, some fields may be empty, but the string is expected to contain some data, empty responses have been sorted.", "### Languages\n\nThe dataset is mostly in Russian, but there may be other languages present.", "## Dataset Structure", "### Data Fields\n\nThis dataset includes the following fields:\n\n- 'id': Identifier for the product (integer)\n- 'title': Title of the product (string)\n- 'category': Category of the product (string)\n- 'parent_category': Parent category of the product (string)\n- 'grandparent_category': Grandparent category of the product (string)\n- 'greatgrandparent_category': Great-grandparent category of the product (string)\n- 'rating': Rating of the product (float)\n- 'description': Description of the product (string)\n- 'comments': Comments about the product (array)\n- 'attributes': Attributes of the product (array)\n- 'seller': Name of the seller (string)\n- 'seller_description': Description provided by the seller (string)\n- 'adultCategory': Whether the product is for adults (boolean)", "### Data Splits\n\nAll examples are in the train split, there is no validation split.", "## Additional Information", "### License\n\nThis dataset is dedicated to the public domain under the Creative Commons Zero (CC0) license. This means you can:\n\n* Use it for any purpose, including commercial projects.\n* Modify it however you like.\n* Distribute it without asking permission.\n\nNo attribution is required, but it's always appreciated!\n\nCC0 license: URL\n\nTo learn more about CC0, visit the Creative Commons website: URL", "### Dataset Curators\n\n- nyuuzyou" ]
a275ac4daa1738f5ef4cc95a17b8edb99f37d869
# Dataset Card for "ultrachat_sft_instruction_format" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jtatman/ultrachat_sft_instruction_format
[ "region:us" ]
2024-01-29T20:03:50+00:00
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "instruction", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2605877996, "num_examples": 657794}], "download_size": 1259509466, "dataset_size": 2605877996}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-29T20:13:10+00:00
[]
[]
TAGS #region-us
# Dataset Card for "ultrachat_sft_instruction_format" More Information needed
[ "# Dataset Card for \"ultrachat_sft_instruction_format\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"ultrachat_sft_instruction_format\"\n\nMore Information needed" ]
5477773674c591dd422a95fd3793791b1a4a235d
# Dataset Card for "ultrachat_gen_instruction_format" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jtatman/ultrachat_gen_instruction_format
[ "region:us" ]
2024-01-29T20:29:25+00:00
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "instruction", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2185075786, "num_examples": 551972}], "download_size": 1109528314, "dataset_size": 2185075786}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-29T20:39:28+00:00
[]
[]
TAGS #region-us
# Dataset Card for "ultrachat_gen_instruction_format" More Information needed
[ "# Dataset Card for \"ultrachat_gen_instruction_format\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"ultrachat_gen_instruction_format\"\n\nMore Information needed" ]
9c384e32fe8d67cf11260eb4d3b10965e811d4fb
# Dataset Card for "ultrafeedback_binarized_narrow_unlabeled" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Asap7772/ultrafeedback_binarized_narrow_unlabeled
[ "region:us" ]
2024-01-29T21:24:03+00:00
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "prompt_id", "dtype": "string"}, {"name": "chosen", "dtype": "string"}, {"name": "rejected", "dtype": "string"}, {"name": "reward_chosen", "dtype": "int64"}, {"name": "reward_rejected", "dtype": "int64"}, {"name": "score_chosen", "dtype": "int64"}, {"name": "score_rejected", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 184309550, "num_examples": 60672}], "download_size": 108000936, "dataset_size": 184309550}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-29T21:24:12+00:00
[]
[]
TAGS #region-us
# Dataset Card for "ultrafeedback_binarized_narrow_unlabeled" More Information needed
[ "# Dataset Card for \"ultrafeedback_binarized_narrow_unlabeled\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"ultrafeedback_binarized_narrow_unlabeled\"\n\nMore Information needed" ]
36e6aef85a785b7d22cbe9f127e4a2e1ca87e2e5
# Dataset Card for Spider-Syn [Spider-Syn](https://github.com/ygan/Spider-Syn) is a human-curated variant of the [Spider](https://yale-lily.github.io/spider) text-to-SQL dataset. The dataset was created to test the robustness of text-to-SQL models to synonym substitution. The source Git repo for Spider-Syn is located here: https://github.com/ygan/Spider-Syn Details regarding the data perturbation methods used and their objectives are described in ACL 2021: [arXiv](https://arxiv.org/abs/2106.01065) ## Paper Abstract > Recently, there has been significant progress in studying neural networks to translate text descriptions into SQL queries. Despite achieving good performance on some public benchmarks, existing text-to-SQL models typically rely on the lexical matching between words in natural language (NL) questions and tokens in table schemas, which may render the models vulnerable to attacks that break the schema linking mechanism. In this work, we investigate the robustness of text-to-SQL models to synonym substitution. In particular, we introduce Spider-Syn, a human-curated dataset based on the Spider benchmark for text-to-SQL translation. NL questions in Spider-Syn are modified from Spider, by replacing their schema-related words with manually selected synonyms that reflect real-world question paraphrases. We observe that the accuracy dramatically drops by eliminating such explicit correspondence between NL questions and table schemas, even if the synonyms are not adversarially selected to conduct worst-case adversarial attacks. Finally, we present two categories of approaches to improve the model robustness. The first category of approaches utilizes additional synonym annotations for table schemas by modifying the model input, while the second category is based on adversarial training. We demonstrate that both categories of approaches significantly outperform their counterparts without the defense, and the first category of approaches are more effective. ## Citation Information ``` @inproceedings{gan-etal-2021-towards, title = "Towards Robustness of Text-to-{SQL} Models against Synonym Substitution", author = "Gan, Yujian and Chen, Xinyun and Huang, Qiuping and Purver, Matthew and Woodward, John R. and Xie, Jinxia and Huang, Pengsheng", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.acl-long.195", doi = "10.18653/v1/2021.acl-long.195", pages = "2505--2515", } ```
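The card does not document column names, so the sketch below only loads the default configuration and prints whatever schema it exposes; the split name is an assumption.

```python
from datasets import load_dataset

ds = load_dataset("aherntech/spider-syn", split="train")  # split name is an assumption
print(ds.column_names)  # expected to include the synonym-substituted NL question and its SQL query
print(ds[0])
```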
aherntech/spider-syn
[ "task_categories:text2text-generation", "size_categories:1K<n<10K", "language:en", "license:mit", "text-to-sql", "arxiv:2106.01065", "region:us" ]
2024-01-29T21:28:24+00:00
{"language": ["en"], "license": "mit", "size_categories": ["1K<n<10K"], "task_categories": ["text2text-generation"], "pretty_name": "Spider-Syn", "tags": ["text-to-sql"]}
2024-01-29T22:52:05+00:00
[ "2106.01065" ]
[ "en" ]
TAGS #task_categories-text2text-generation #size_categories-1K<n<10K #language-English #license-mit #text-to-sql #arxiv-2106.01065 #region-us
# Dataset Card for Spider-Syn Spider-Syn is a human-curated variant of the Spider text-to-SQL dataset. The dataset was created to test the robustness of text-to-SQL models to synonym substitution. The source Git repo for Spider-Syn is located here: URL Details regarding the data perturbation methods used and their objectives are described in ACL 2021: arXiv ## Paper Abstract > Recently, there has been significant progress in studying neural networks to translate text descriptions into SQL queries. Despite achieving good performance on some public benchmarks, existing text-to-SQL models typically rely on the lexical matching between words in natural language (NL) questions and tokens in table schemas, which may render the models vulnerable to attacks that break the schema linking mechanism. In this work, we investigate the robustness of text-to-SQL models to synonym substitution. In particular, we introduce Spider-Syn, a human-curated dataset based on the Spider benchmark for text-to-SQL translation. NL questions in Spider-Syn are modified from Spider, by replacing their schema-related words with manually selected synonyms that reflect real-world question paraphrases. We observe that the accuracy dramatically drops by eliminating such explicit correspondence between NL questions and table schemas, even if the synonyms are not adversarially selected to conduct worst-case adversarial attacks. Finally, we present two categories of approaches to improve the model robustness. The first category of approaches utilizes additional synonym annotations for table schemas by modifying the model input, while the second category is based on adversarial training. We demonstrate that both categories of approaches significantly outperform their counterparts without the defense, and the first category of approaches are more effective.
[ "# Dataset Card for Sypder-Syn\n\nSpyder-Syn is a human curated variant of the Spider Text-to-SQL database.\nThe database was created to test the robustness of text-to-SQL models for robustness of synonym substitution.\n\nThe source GIT repo for Sypder-Syn is located here: URL\n\nDetails regarding the data perterbation methods used and objectives are described in ACL 2021: arXiv", "## Paper Abstract\n\n> Recently, there has been significant progress in studying neural networks to translate text descriptions into SQL queries. Despite achieving good performance on some public benchmarks, existing text-to-SQL models typically rely on the lexical matching between words in natural language (NL) questions and tokens in table schemas, which may render the models vulnerable to attacks that break the schema linking mechanism. In this work, we investigate the robustness of text-to-SQL models to synonym substitution. In particular, we introduce Spider-Syn, a human-curated dataset based on the Spider benchmark for text-to-SQL translation. NL questions in Spider-Syn are modified from Spider, by replacing their schema-related words with manually selected synonyms that reflect real-world question paraphrases. We observe that the accuracy dramatically drops by eliminating such explicit correspondence between NL questions and table schemas, even if the synonyms are not adversarially selected to conduct worst-case adversarial attacks. Finally, we present two categories of approaches to improve the model robustness. The first category of approaches utilizes additional synonym annotations for table schemas by modifying the model input, while the second category is based on adversarial training. We demonstrate that both categories of approaches significantly outperform their counterparts without the defense, and the first category of approaches are more effective." ]
[ "TAGS\n#task_categories-text2text-generation #size_categories-1K<n<10K #language-English #license-mit #text-to-sql #arxiv-2106.01065 #region-us \n", "# Dataset Card for Sypder-Syn\n\nSpyder-Syn is a human curated variant of the Spider Text-to-SQL database.\nThe database was created to test the robustness of text-to-SQL models for robustness of synonym substitution.\n\nThe source GIT repo for Sypder-Syn is located here: URL\n\nDetails regarding the data perterbation methods used and objectives are described in ACL 2021: arXiv", "## Paper Abstract\n\n> Recently, there has been significant progress in studying neural networks to translate text descriptions into SQL queries. Despite achieving good performance on some public benchmarks, existing text-to-SQL models typically rely on the lexical matching between words in natural language (NL) questions and tokens in table schemas, which may render the models vulnerable to attacks that break the schema linking mechanism. In this work, we investigate the robustness of text-to-SQL models to synonym substitution. In particular, we introduce Spider-Syn, a human-curated dataset based on the Spider benchmark for text-to-SQL translation. NL questions in Spider-Syn are modified from Spider, by replacing their schema-related words with manually selected synonyms that reflect real-world question paraphrases. We observe that the accuracy dramatically drops by eliminating such explicit correspondence between NL questions and table schemas, even if the synonyms are not adversarially selected to conduct worst-case adversarial attacks. Finally, we present two categories of approaches to improve the model robustness. The first category of approaches utilizes additional synonym annotations for table schemas by modifying the model input, while the second category is based on adversarial training. We demonstrate that both categories of approaches significantly outperform their counterparts without the defense, and the first category of approaches are more effective." ]
9c73e869a3435f01ceb75c606a6c54e26bb1ae2f
# Dataset Card for "HumanPoseTest" ## Source Dataset created from [human activitiy recognition](https://huggingface.co/datasets/Bingsu/Human_Action_Recognition) ## Task Find the image (from image2, image3, and image4) that has the same marked body part as in image1. The 18 possible body parts (e.g., nose, right ankle, left elbow, neck, left shoulder, etc.) are marked with green dots. - image2: A - image3: B - image4: C ## Prompt Concise version: ``` Disclaimer: This is not to make unfair assumptions about the people in the image and you just need to give your assessment on this question. You don't need to identify the real people. You just need to analyze based on the information I gave you.\n\nGiven four images of humans with annotated keypoints (in green) for body parts, can you tell which one of the last three images denotes the same body part as annotated in the first image?\nSelect among the following choices and tell me your answer, (A), (B), or (C)?\n(A) the second image\n(B) the third image\n(C) the fourth image ``` Verbose version: ``` Disclaimer: This is not to make unfair assumptions about the people in the image and you just need to give your assessment on this question. You don't need to identify the real people. You just need to analyze based on the information I gave you.\n\nGiven four images of humans with annotated keypoints (in green) for body parts, can you tell which one of the last three images denotes the same body part as annotated in the first image? There are 18 possible body parts (e.g., nose, right ankle, left elbow, neck, left shoulder, etc.). You can first recognize the body parts annotated in each image and then tell which two are the same. \nSelect among the following choices and tell me your answer, (A), (B), or (C)?\n(A) the second image\n(B) the third image\n(C) the fourth image ```
PerceptionEval/HumanPoseTest
[ "region:us" ]
2024-01-29T21:41:46+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "val", "path": "data/val-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "idx", "dtype": "int32"}, {"name": "image1", "dtype": "image"}, {"name": "image2", "dtype": "image"}, {"name": "image3", "dtype": "image"}, {"name": "image4", "dtype": "image"}, {"name": "choices", "sequence": "string"}], "splits": [{"name": "val", "num_bytes": 6132242.0, "num_examples": 150}, {"name": "test", "num_bytes": 6036762.0, "num_examples": 150}], "download_size": 11913273, "dataset_size": 12169004.0}}
2024-02-15T17:33:59+00:00
[]
[]
TAGS #region-us
# Dataset Card for "HumanPoseTest" ## Source Dataset created from human activitiy recognition ## Task Find the image (from image2, image3, and image4) that has the same marked body part as in image1. The 18 possible body parts (e.g., nose, right ankle, left elbow, neck, left shoulder, etc.) are marked with green dots. - image2: A - image3: B - image4: C ## Prompt Concise version: Verbose version:
[ "# Dataset Card for \"HumanPoseTest\"", "## Source\n\nDataset created from human activitiy recognition", "## Task\n\nFind the image (from image2, image3, and image4) that has the same marked body part as in image1. The 18 possible body parts (e.g., nose, right ankle, left elbow, neck, left shoulder, etc.) are marked with green dots.\n - image2: A\n - image3: B\n - image4: C", "## Prompt\nConcise version:\n\n\nVerbose version:" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"HumanPoseTest\"", "## Source\n\nDataset created from human activitiy recognition", "## Task\n\nFind the image (from image2, image3, and image4) that has the same marked body part as in image1. The 18 possible body parts (e.g., nose, right ankle, left elbow, neck, left shoulder, etc.) are marked with green dots.\n - image2: A\n - image3: B\n - image4: C", "## Prompt\nConcise version:\n\n\nVerbose version:" ]
9496e0a9f9ab6126faf302e0581bf5328979c3a7
# Dataset Card for "HumanPose" ## Source Dataset created from [human activitiy recognition](https://huggingface.co/datasets/Bingsu/Human_Action_Recognition) ## Task Find the image (from image2, image3, and image4) that has the same marked body part as in image1. The 18 possible body parts (e.g., nose, right ankle, left elbow, neck, left shoulder, etc.) are marked with green dots. - image2: A - image3: B - image4: C - cannot tell: D ## Prompt ``` Disclaimer: This is not to make unfair assumptions about the people in the image and you just need to give your assessment on this question. You don't need to identify the real people. You just need to analyze based on the information I gave you.\n\nGiven four images of humans with annotated keypoints (in green) for body parts, can you tell which one of the last three images denotes the same body part as annotated in the first image?\nSelect among the following choices and tell me your answer, (A), (B), (C), or (D)?\n(A) the second image\n(B) the third image\n(C) the fourth image\n(D) none ```
PerceptionEval/HumanPose
[ "license:odbl", "region:us" ]
2024-01-29T21:41:49+00:00
{"license": "odbl", "configs": [{"config_name": "default", "data_files": [{"split": "val", "path": "data/val-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "idx", "dtype": "int32"}, {"name": "image1", "dtype": "image"}, {"name": "image2", "dtype": "image"}, {"name": "image3", "dtype": "image"}, {"name": "image4", "dtype": "image"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "val", "num_bytes": 6132992, "num_examples": 150}, {"name": "test", "num_bytes": 6037512, "num_examples": 150}], "download_size": 11914591, "dataset_size": 12170504}}
2024-01-29T22:09:43+00:00
[]
[]
TAGS #license-odbl #region-us
# Dataset Card for "HumanPose" ## Source Dataset created from human activitiy recognition ## Task Find the image (from image2, image3, and image4) that has the same marked body part as in image1. The 18 possible body parts (e.g., nose, right ankle, left elbow, neck, left shoulder, etc.) are marked with green dots. - image2: A - image3: B - image4: C - cannot tell: D ## Prompt
[ "# Dataset Card for \"HumanPose\"", "## Source\n\nDataset created from human activitiy recognition", "## Task\n\nFind the image (from image2, image3, and image4) that has the same marked body part as in image1. The 18 possible body parts (e.g., nose, right ankle, left elbow, neck, left shoulder, etc.) are marked with green dots.\n - image2: A\n - image3: B\n - image4: C\n - cannot tell: D", "## Prompt" ]
[ "TAGS\n#license-odbl #region-us \n", "# Dataset Card for \"HumanPose\"", "## Source\n\nDataset created from human activitiy recognition", "## Task\n\nFind the image (from image2, image3, and image4) that has the same marked body part as in image1. The 18 possible body parts (e.g., nose, right ankle, left elbow, neck, left shoulder, etc.) are marked with green dots.\n - image2: A\n - image3: B\n - image4: C\n - cannot tell: D", "## Prompt" ]
3179d8b408c8913f76eceed7c6ba52ba1cf54e60
## Source dataset created from (Practice) Graphical Reasoning Questions for Civil Service Exam in China. ## Task Identify the one picture that follows the same pattern or rule established by the previous pictures. ## Prompt: ``` Prompt 1: During the IQ test, you'll be presented with four picture options. Your task is to identify the one picture that follows the same pattern or rule established by the previous pictures. Here are some strategies to help you determine the right choice: 1. Look for patterns related to quantity or numbers that progress from one picture to the next. 2. Check if there is a consistent way the images are rotated or flipped in sequence. 3. Identify a common feature that each successive picture shares with the previous one, while also paying attention to any variations that might indicate a regular progression or change. Select between the following choices and tell me your answer, (A), (B), (C), or (D)? (A) ... (B) ... (C) ... (D) ... Prompt 2: During the IQ test, you'll be presented with four picture options. Your task involves spatial reasoning: the outer surface of the carton is displayed on the left, and you need to determine which of the following options can be folded into it. Select between the following choices and tell me your answer, (A), (B), (C), or (D)? (A) ... (B) ... (C) ... (D) ... ```
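A short, hedged loading sketch for this split follows; it assumes `choices` stores the text of options (A)–(D) in order (as suggested by the prompt template above), and the model call is left as a commented placeholder.

```python
from datasets import load_dataset

ds = load_dataset("PerceptionEval/IQTestTest", split="test")

example = ds[0]                      # features: idx, image, choices
labels = ["(A)", "(B)", "(C)", "(D)"]
options = "\n".join(f"{lab} {choice}" for lab, choice in zip(labels, example["choices"]))
question = (
    "Your task is to identify the one picture that follows the same pattern or rule "
    "established by the previous pictures.\n"
    "Select between the following choices and tell me your answer, (A), (B), (C), or (D)?\n"
    + options
)
# A multimodal model would receive example["image"] together with `question`.
print(question)
```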
PerceptionEval/IQTestTest
[ "region:us" ]
2024-01-29T22:51:30+00:00
{"dataset_info": {"features": [{"name": "idx", "dtype": "int32"}, {"name": "image", "dtype": "image"}, {"name": "choices", "sequence": "string"}], "splits": [{"name": "val", "num_bytes": 5803996.0, "num_examples": 150}, {"name": "test", "num_bytes": 5840032.0, "num_examples": 150}], "download_size": 10977650, "dataset_size": 11644028.0}, "configs": [{"config_name": "default", "data_files": [{"split": "val", "path": "data/val-*"}, {"split": "test", "path": "data/test-*"}]}]}
2024-02-01T21:22:35+00:00
[]
[]
TAGS #region-us
## Source dataset created from (Practice) Graphical Reasoning Questions for Civil Service Exam in China. ## Task Identify the one picture that follows the same pattern or rule established by the previous pictures. ## Prompt:
[ "## Source\ndataset created from (Practice) Graphical Reasoning Questions for Civil Service Exam in China.", "## Task\nIdentify the one picture that follows the same pattern or rule established by the previous pictures.", "## Prompt:" ]
[ "TAGS\n#region-us \n", "## Source\ndataset created from (Practice) Graphical Reasoning Questions for Civil Service Exam in China.", "## Task\nIdentify the one picture that follows the same pattern or rule established by the previous pictures.", "## Prompt:" ]
6f955fb96bbf15ea6dad6c9d75c1a386ca1301de
## Source dataset created from (Practice) Graphical Reasoning Questions for Civil Service Exam in China. ## Task Identify the one picture that follows the same pattern or rule established by the previous pictures. ## Prompt: ``` Prompt 1: During the IQ test, you'll be presented with four picture options. Your task is to identify the one picture that follows the same pattern or rule established by the previous pictures. Here are some strategies to help you determine the right choice: 1. Look for patterns related to quantity or numbers that progress from one picture to the next. 2. Check if there is a consistent way the images are rotated or flipped in sequence. 3. Identify a common feature that each successive picture shares with the previous one, while also paying attention to any variations that might indicate a regular progression or change. Select between the following choices and tell me your answer, (A), (B), (C), or (D)? (A) ... (B) ... (C) ... (D) ... Prompt 2: During the IQ test, you'll be presented with four picture options. Your task involves spatial reasoning: the outer surface of the carton is displayed on the left, and you need to determine which of the following options can be folded into it. Select between the following choices and tell me your answer, (A), (B), (C), or (D)? (A) ... (B) ... (C) ... (D) ... ```
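Because this split additionally exposes `answer` and `explanation` columns, one possible (unofficial) scoring loop is sketched below; `predictions` is a stand-in for real model outputs, and the letter extraction assumes answers are recorded as single option letters.

```python
from datasets import load_dataset

ds = load_dataset("PerceptionEval/IQTest", split="val")

predictions = ["(A)"] * len(ds)       # stand-in for model outputs

def to_letter(text):
    """Pick out the first A-D letter in a response or gold answer."""
    for ch in str(text).upper():
        if ch in "ABCD":
            return ch
    return None

wrong = [(ex["idx"], ex["answer"], ex["explanation"])
         for ex, pred in zip(ds, predictions)
         if to_letter(pred) != to_letter(ex["answer"])]
print(f"accuracy: {1 - len(wrong) / len(ds):.3f}")
for idx, answer, explanation in wrong[:3]:
    print(idx, answer, explanation)   # the explanation describes the intended rule
```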
PerceptionEval/IQTest
[ "region:us" ]
2024-01-29T22:51:43+00:00
{"dataset_info": {"features": [{"name": "idx", "dtype": "int32"}, {"name": "image", "dtype": "image"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "string"}, {"name": "explanation", "dtype": "string"}], "splits": [{"name": "val", "num_bytes": 5837945.0, "num_examples": 150}, {"name": "test", "num_bytes": 5870736.0, "num_examples": 150}], "download_size": 11012105, "dataset_size": 11708681.0}, "configs": [{"config_name": "default", "data_files": [{"split": "val", "path": "data/val-*"}, {"split": "test", "path": "data/test-*"}]}]}
2024-02-01T21:23:07+00:00
[]
[]
TAGS #region-us
## Source dataset created from (Practice) Graphical Reasoning Questions for Civil Service Exam in China. ## Task Identify the one picture that follows the same pattern or rule established by the previous pictures. ## Prompt:
[ "## Source\ndataset created from (Practice) Graphical Reasoning Questions for Civil Service Exam in China.", "## Task\nIdentify the one picture that follows the same pattern or rule established by the previous pictures.", "## Prompt:" ]
[ "TAGS\n#region-us \n", "## Source\ndataset created from (Practice) Graphical Reasoning Questions for Civil Service Exam in China.", "## Task\nIdentify the one picture that follows the same pattern or rule established by the previous pictures.", "## Prompt:" ]
4efac62f2e2dff6a9b86ef6ff5f6403e0ae013d3
# WhisperKit Evaluation Results ## Dataset: `librispeech` ### WhisperKit + `openai_whisper-large-v3` (+optimized variants) | | WER | QoI (%) | File Size (MB) | |:----------------------------------------------------------------------------------------------------------------------------------------------|------:|----------:|-----------------:| | [openai_whisper-large-v3](https://huggingface.co/argmaxinc/whisperkit-coreml-rc1/tree/main/openai_whisper-large-v3) | 2.44 | 100 | 3100 | | [openai_whisper-large-v3_turbo](https://huggingface.co/argmaxinc/whisperkit-coreml-rc1/tree/main/openai_whisper-large-v3_turbo) | 2.41 | 99.8 | 3100 | | [openai_whisper-large-v3_turbo_1307MB](https://huggingface.co/argmaxinc/whisperkit-coreml-rc1/tree/main/openai_whisper-large-v3_turbo_1307MB) | 2.6 | 97.7 | 1307 | | [openai_whisper-large-v3_turbo_1049MB](https://huggingface.co/argmaxinc/whisperkit-coreml-rc1/tree/main/openai_whisper-large-v3_turbo_1049MB) | 4.81 | 91 | 1049 | | [openai_whisper-large-v3_1053MB](https://huggingface.co/argmaxinc/whisperkit-coreml-rc1/tree/main/openai_whisper-large-v3_1053MB) | 4.65 | 90.8 | 1053 | ### Different Projects + `openai_whisper-large-v3` | | WER | Commit Hash | Model Format | |:-------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------|:--------------|:---------------| | [WhisperKit](https://github.com/argmaxinc/whisperkit) | [2.44](https://hf.co/datasets/argmaxinc/whisperkit-evals-rc1/tree/main/WhisperKit/openai_whisper-large-v3/librispeech) | 0f8b4fe | Core ML | | [WhisperCpp](https://github.com/ggerganov/whisper.cpp) | [2.36](https://hf.co/datasets/argmaxinc/whisperkit-evals-rc1/tree/main/whisper.cpp/openai_whisper-large-v3/librispeech) | e72e415 | Core ML + GGUF | | [WhisperMLX](https://github.com/ml-explore/mlx-examples/blob/main/whisper/whisper/transcribe.py) | [2.69](https://hf.co/datasets/argmaxinc/whisperkit-evals-rc1/tree/main/WhisperMLX/openai_whisper-large-v3/librispeech) | 614de66 | MLX (Numpy) | ### Quality-of-Inference (QoI) Certification We believe that rigorously measuring the quality of inference is necessary for developers and enterprises to make informed decisions when opting to use optimized or compressed variants of Whisper models in production. The current measurements are between reference and optimized WhisperKit models. We are going to extend the scope of this measurement to other Whisper implementations soon so developers can certify the behavior change (if any) caused by alternating use of WhisperKit with (or migration from) these implementations. In all measurements, we care primarily about per-example no-regressions (quantified as `qoi` below) which is a stricter metric compared to dataset average WER. A 100% `qoi` preserves perfect backwards-compatibility on the test distribution and avoids "perceived regressions", the phenomenon where per-example known behavior changes after a code/model update and causes divergence in downstream code or breaks the user experience itself (even if dataset averages might stay flat across updates). Pseudocode for `qoi`: ```python qoi = [] for example in dataset: no_regression = wer(optimized_model(example)) <= wer(reference_model(example)) qoi.append(no_regression) qoi = (sum(qoi) / len(qoi)) * 100. 
``` We define the reference model as the default float16 precision Core ML model that is generated by whisperkittools. This reference model matches the accuracy of the original PyTorch model on the specified test sets. We use `librispeech/test.clean` (5 hours of short English audio clips) as our testing set for Whisper. We are actively expanding our test set coverage to `earnings22` (120 hours of long English audio clips with various accents). We anticipate developers that use Whisper in production to have their own Quality Assurance test sets and whisperkittools offers the tooling necessary to run the same measurements on such custom test sets, please see the [Model Evaluation on Custom Dataset](#evaluate-on-custom-dataset) for details. ### Reproducing Results Results in this page are generated by our cluster of Apple Silicon Macs. We use them as self-hosted runners on Github Actions as our CI infrastructure. Due to [security concerns](https://docs.github.com/en/actions/security-guides/security-hardening-for-github-actions#hardening-for-self-hosted-runners), we are unable to open up the cluster to the public. However, any Apple Silicon Mac (even with 8GB RAM) can be used to run identical [evaluation jobs](#evaluation) locally. For reference, our M2 Ultra devices complete a `librispeech` + `openai/whisper-large-v3` evaluation in under 1 hour regardless of the Whisper implementation. Older Apple Silicon Macs should take less than 1 day to complete the same evaluation. Glossary: - `_turbo`: Indicates the presence of additional optimizations (not compression) to unlock streaming transcription as described in our [Blog Post](https://www.takeargmax.com/blog/whisperkit). - `_*MB`: Indicates the presence of mixed-bit quantization. Instead of cluttering the filename with details like `_AudioEncoder-5.8bits_TextDecoder-6.1bits`, we choose to summarize the compression spec as the resulting total file size since this is what matters to developers in production.
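For reference, here is one runnable restatement of the QoI pseudocode above, using the `jiwer` package for WER. The two transcribe functions and the toy dataset are placeholders for illustration, not whisperkittools APIs.

```python
# Runnable restatement of the QoI pseudocode above; `jiwer` supplies WER.
from jiwer import wer

def qoi_percent(dataset, transcribe_reference, transcribe_optimized):
    """Percentage of examples where the optimized model's WER does not regress."""
    no_regressions = []
    for example in dataset:
        reference_text = example["text"]
        wer_reference = wer(reference_text, transcribe_reference(example["audio"]))
        wer_optimized = wer(reference_text, transcribe_optimized(example["audio"]))
        no_regressions.append(wer_optimized <= wer_reference)
    return 100.0 * sum(no_regressions) / len(no_regressions)

# Toy usage with canned outputs standing in for real model inference:
toy_dataset = [{"audio": None, "text": "hello world"}]
print(qoi_percent(
    toy_dataset,
    transcribe_reference=lambda audio: "hello world",
    transcribe_optimized=lambda audio: "hello word",
))
```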
argmaxinc/whisperkit-evals
[ "whisper", "whisperkit", "coreml", "asr", "quantized", "region:us" ]
2024-01-29T23:17:31+00:00
{"pretty_name": "WhisperKit ASR Evaluation Results", "tags": ["whisper", "whisperkit", "coreml", "asr", "quantized"]}
2024-02-17T09:31:31+00:00
[]
[]
TAGS #whisper #whisperkit #coreml #asr #quantized #region-us
WhisperKit Evaluation Results ============================= Dataset: 'librispeech' ---------------------- ### WhisperKit + 'openai\_whisper-large-v3' (+optimized variants) ### Different Projects + 'openai\_whisper-large-v3' ### Quality-of-Inference (QoI) Certification We believe that rigorously measuring the quality of inference is necessary for developers and enterprises to make informed decisions when opting to use optimized or compressed variants of Whisper models in production. The current measurements are between reference and optimized WhisperKit models. We are going to extend the scope of this measurement to other Whisper implementations soon so developers can certify the behavior change (if any) caused by alternating use of WhisperKit with (or migration from) these implementations. In all measurements, we care primarily about per-example no-regressions (quantified as 'qoi' below) which is a stricter metric compared to dataset average WER. A 100% 'qoi' preserves perfect backwards-compatibility on the test distribution and avoids "perceived regressions", the phenomenon where per-example known behavior changes after a code/model update and causes divergence in downstream code or breaks the user experience itself (even if dataset averages might stay flat across updates). Pseudocode for 'qoi': We define the reference model as the default float16 precision Core ML model that is generated by whisperkittools. This reference model matches the accuracy of the original PyTorch model on the specified test sets. We use 'librispeech/URL' (5 hours of short English audio clips) as our testing set for Whisper. We are actively expanding our test set coverage to 'earnings22' (120 hours of long English audio clips with various accents). We anticipate developers that use Whisper in production to have their own Quality Assurance test sets and whisperkittools offers the tooling necessary to run the same measurements on such custom test sets, please see the Model Evaluation on Custom Dataset for details. ### Reproducing Results Results in this page are generated by our cluster of Apple Silicon Macs. We use them as self-hosted runners on Github Actions as our CI infrastructure. Due to security concerns, we are unable to open up the cluster to the public. However, any Apple Silicon Mac (even with 8GB RAM) can be used to run identical evaluation jobs locally. For reference, our M2 Ultra devices complete a 'librispeech' + 'openai/whisper-large-v3' evaluation in under 1 hour regardless of the Whisper implementation. Older Apple Silicon Macs should take less than 1 day to complete the same evaluation. Glossary: * '\_turbo': Indicates the presence of additional optimizations (not compression) to unlock streaming transcription as described in our Blog Post. * '\_\*MB': Indicates the presence of mixed-bit quantization. Instead of cluttering the filename with details like '\_AudioEncoder-5.8bits\_TextDecoder-6.1bits', we choose to summarize the compression spec as the resulting total file size since this is what matters to developers in production.
[ "### WhisperKit + 'openai\\_whisper-large-v3' (+optimized variants)", "### Different Projects + 'openai\\_whisper-large-v3'", "### Quality-of-Inference (QoI) Certification\n\n\nWe believe that rigorously measuring the quality of inference is necessary for developers and\nenterprises to make informed decisions when opting to use optimized or compressed variants of\nWhisper models in production. The current measurements are between reference and optimized\nWhisperKit models. We are going to extend the scope of this measurement to other Whisper\nimplementations soon so developers can certify the behavior change (if any) caused by\nalternating use of WhisperKit with (or migration from) these implementations.\n\n\nIn all measurements, we care primarily about per-example no-regressions (quantified as 'qoi' below)\nwhich is a stricter metric compared to dataset average WER. A 100% 'qoi' preserves perfect\nbackwards-compatibility on the test distribution and avoids \"perceived regressions\", the phenomenon\nwhere per-example known behavior changes after a code/model update and causes divergence in\ndownstream code or breaks the user experience itself (even if dataset averages might stay flat\nacross updates). Pseudocode for 'qoi':\n\n\nWe define the reference model as the default float16 precision Core ML model that is generated by\nwhisperkittools. This reference model matches the accuracy of the original PyTorch model\non the specified test sets. We use 'librispeech/URL' (5 hours of short English audio clips)\nas our testing set for Whisper. We are actively expanding our test set coverage to 'earnings22'\n(120 hours of long English audio clips with various accents). We anticipate developers that use Whisper in production to have\ntheir own Quality Assurance test sets and whisperkittools offers the tooling necessary to run the\nsame measurements on such custom test sets, please see the Model Evaluation on Custom Dataset\nfor details.", "### Reproducing Results\n\n\nResults in this page are generated by our cluster of Apple Silicon Macs. We use them as self-hosted runners on\nGithub Actions as our CI infrastructure. Due to security concerns,\nwe are unable to open up the cluster to the public. However, any Apple Silicon Mac (even with 8GB RAM) can be used to\nrun identical evaluation jobs\nlocally. For reference, our M2 Ultra devices complete a 'librispeech' + 'openai/whisper-large-v3'\nevaluation in under 1 hour regardless of the Whisper implementation. Older Apple Silicon Macs should take less than\n1 day to complete the same evaluation.\n\n\nGlossary:\n\n\n* '\\_turbo': Indicates the presence of additional optimizations (not compression) to unlock streaming transcription\nas described in our Blog Post.\n* '\\_\\*MB': Indicates the presence of mixed-bit quantization. Instead of cluttering the filename with details like\n'\\_AudioEncoder-5.8bits\\_TextDecoder-6.1bits', we choose to summarize the compression spec as the resulting total file size since this is what matters to developers in production." ]
[ "TAGS\n#whisper #whisperkit #coreml #asr #quantized #region-us \n", "### WhisperKit + 'openai\\_whisper-large-v3' (+optimized variants)", "### Different Projects + 'openai\\_whisper-large-v3'", "### Quality-of-Inference (QoI) Certification\n\n\nWe believe that rigorously measuring the quality of inference is necessary for developers and\nenterprises to make informed decisions when opting to use optimized or compressed variants of\nWhisper models in production. The current measurements are between reference and optimized\nWhisperKit models. We are going to extend the scope of this measurement to other Whisper\nimplementations soon so developers can certify the behavior change (if any) caused by\nalternating use of WhisperKit with (or migration from) these implementations.\n\n\nIn all measurements, we care primarily about per-example no-regressions (quantified as 'qoi' below)\nwhich is a stricter metric compared to dataset average WER. A 100% 'qoi' preserves perfect\nbackwards-compatibility on the test distribution and avoids \"perceived regressions\", the phenomenon\nwhere per-example known behavior changes after a code/model update and causes divergence in\ndownstream code or breaks the user experience itself (even if dataset averages might stay flat\nacross updates). Pseudocode for 'qoi':\n\n\nWe define the reference model as the default float16 precision Core ML model that is generated by\nwhisperkittools. This reference model matches the accuracy of the original PyTorch model\non the specified test sets. We use 'librispeech/URL' (5 hours of short English audio clips)\nas our testing set for Whisper. We are actively expanding our test set coverage to 'earnings22'\n(120 hours of long English audio clips with various accents). We anticipate developers that use Whisper in production to have\ntheir own Quality Assurance test sets and whisperkittools offers the tooling necessary to run the\nsame measurements on such custom test sets, please see the Model Evaluation on Custom Dataset\nfor details.", "### Reproducing Results\n\n\nResults in this page are generated by our cluster of Apple Silicon Macs. We use them as self-hosted runners on\nGithub Actions as our CI infrastructure. Due to security concerns,\nwe are unable to open up the cluster to the public. However, any Apple Silicon Mac (even with 8GB RAM) can be used to\nrun identical evaluation jobs\nlocally. For reference, our M2 Ultra devices complete a 'librispeech' + 'openai/whisper-large-v3'\nevaluation in under 1 hour regardless of the Whisper implementation. Older Apple Silicon Macs should take less than\n1 day to complete the same evaluation.\n\n\nGlossary:\n\n\n* '\\_turbo': Indicates the presence of additional optimizations (not compression) to unlock streaming transcription\nas described in our Blog Post.\n* '\\_\\*MB': Indicates the presence of mixed-bit quantization. Instead of cluttering the filename with details like\n'\\_AudioEncoder-5.8bits\\_TextDecoder-6.1bits', we choose to summarize the compression spec as the resulting total file size since this is what matters to developers in production." ]
7357480c491e201fc6c179cf6b9db8a5b061d391
# Dataset Card for Evaluation run of Qwen/Qwen2-beta-72B <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [Qwen/Qwen2-beta-72B](https://huggingface.co/Qwen/Qwen2-beta-72B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configuration, each one coresponding to one of the evaluated task. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results. An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_Qwen__Qwen2-beta-72B_private", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2024-01-29T23:47:04.571636](https://huggingface.co/datasets/open-llm-leaderboard/details_Qwen__Qwen2-beta-72B_private/blob/main/results_2024-01-29T23-47-04.571636.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.7669743429877653, "acc_stderr": 0.027971495069922473, "acc_norm": 0.7715834368806984, "acc_norm_stderr": 0.028493498109494097, "mc1": 0.412484700122399, "mc1_stderr": 0.017233299399571227, "mc2": 0.596080564321232, "mc2_stderr": 0.01451800985281567 }, "harness|arc:challenge|25": { "acc": 0.6262798634812287, "acc_stderr": 0.014137708601759095, "acc_norm": 0.658703071672355, "acc_norm_stderr": 0.01385583128749773 }, "harness|hellaswag|10": { "acc": 0.6666998605855408, "acc_stderr": 0.004704293898729911, "acc_norm": 0.8598884684325832, "acc_norm_stderr": 0.003463933286063887 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.43, "acc_stderr": 0.049756985195624284, "acc_norm": 0.43, "acc_norm_stderr": 0.049756985195624284 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.7185185185185186, "acc_stderr": 0.038850042458002526, "acc_norm": 0.7185185185185186, "acc_norm_stderr": 0.038850042458002526 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.8881578947368421, "acc_stderr": 0.02564834125169361, "acc_norm": 0.8881578947368421, "acc_norm_stderr": 0.02564834125169361 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.8, "acc_stderr": 0.04020151261036845, "acc_norm": 0.8, "acc_norm_stderr": 0.04020151261036845 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.8226415094339623, "acc_stderr": 0.023508739218846938, "acc_norm": 0.8226415094339623, "acc_norm_stderr": 0.023508739218846938 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.9027777777777778, "acc_stderr": 0.024774516250440175, "acc_norm": 0.9027777777777778, "acc_norm_stderr": 0.024774516250440175 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.52, "acc_stderr": 0.050211673156867795, "acc_norm": 0.52, "acc_norm_stderr": 0.050211673156867795 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.64, "acc_stderr": 0.048241815132442176, "acc_norm": 0.64, "acc_norm_stderr": 0.048241815132442176 }, 
"harness|hendrycksTest-college_mathematics|5": { "acc": 0.54, "acc_stderr": 0.05009082659620332, "acc_norm": 0.54, "acc_norm_stderr": 0.05009082659620332 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.7630057803468208, "acc_stderr": 0.032424147574830975, "acc_norm": 0.7630057803468208, "acc_norm_stderr": 0.032424147574830975 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.5490196078431373, "acc_stderr": 0.049512182523962604, "acc_norm": 0.5490196078431373, "acc_norm_stderr": 0.049512182523962604 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.84, "acc_stderr": 0.03684529491774709, "acc_norm": 0.84, "acc_norm_stderr": 0.03684529491774709 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.8085106382978723, "acc_stderr": 0.025722149992637798, "acc_norm": 0.8085106382978723, "acc_norm_stderr": 0.025722149992637798 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.5877192982456141, "acc_stderr": 0.046306532033665956, "acc_norm": 0.5877192982456141, "acc_norm_stderr": 0.046306532033665956 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.7862068965517242, "acc_stderr": 0.03416520447747549, "acc_norm": 0.7862068965517242, "acc_norm_stderr": 0.03416520447747549 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.6984126984126984, "acc_stderr": 0.0236369759961018, "acc_norm": 0.6984126984126984, "acc_norm_stderr": 0.0236369759961018 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.5793650793650794, "acc_stderr": 0.04415438226743745, "acc_norm": 0.5793650793650794, "acc_norm_stderr": 0.04415438226743745 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.51, "acc_stderr": 0.05024183937956911, "acc_norm": 0.51, "acc_norm_stderr": 0.05024183937956911 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.8838709677419355, "acc_stderr": 0.018225757949432306, "acc_norm": 0.8838709677419355, "acc_norm_stderr": 0.018225757949432306 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.6600985221674877, "acc_stderr": 0.033327690684107895, "acc_norm": 0.6600985221674877, "acc_norm_stderr": 0.033327690684107895 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.84, "acc_stderr": 0.03684529491774707, "acc_norm": 0.84, "acc_norm_stderr": 0.03684529491774707 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.8545454545454545, "acc_stderr": 0.027530196355066573, "acc_norm": 0.8545454545454545, "acc_norm_stderr": 0.027530196355066573 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.9292929292929293, "acc_stderr": 0.0182631054201995, "acc_norm": 0.9292929292929293, "acc_norm_stderr": 0.0182631054201995 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.9896373056994818, "acc_stderr": 0.007308424386792194, "acc_norm": 0.9896373056994818, "acc_norm_stderr": 0.007308424386792194 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.8153846153846154, "acc_stderr": 0.01967163241310029, "acc_norm": 0.8153846153846154, "acc_norm_stderr": 0.01967163241310029 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.4925925925925926, "acc_stderr": 0.030482192395191506, "acc_norm": 0.4925925925925926, "acc_norm_stderr": 0.030482192395191506 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.8361344537815126, "acc_stderr": 0.024044054940440488, "acc_norm": 0.8361344537815126, "acc_norm_stderr": 0.024044054940440488 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.5761589403973509, "acc_stderr": 
0.04034846678603396, "acc_norm": 0.5761589403973509, "acc_norm_stderr": 0.04034846678603396 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.9302752293577982, "acc_stderr": 0.01091942641184862, "acc_norm": 0.9302752293577982, "acc_norm_stderr": 0.01091942641184862 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.6851851851851852, "acc_stderr": 0.0316746870682898, "acc_norm": 0.6851851851851852, "acc_norm_stderr": 0.0316746870682898 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.9313725490196079, "acc_stderr": 0.017744453647073322, "acc_norm": 0.9313725490196079, "acc_norm_stderr": 0.017744453647073322 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.9029535864978903, "acc_stderr": 0.019269323025640273, "acc_norm": 0.9029535864978903, "acc_norm_stderr": 0.019269323025640273 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.7937219730941704, "acc_stderr": 0.02715715047956382, "acc_norm": 0.7937219730941704, "acc_norm_stderr": 0.02715715047956382 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.8778625954198473, "acc_stderr": 0.02871877688934232, "acc_norm": 0.8778625954198473, "acc_norm_stderr": 0.02871877688934232 }, "harness|hendrycksTest-international_law|5": { "acc": 0.9090909090909091, "acc_stderr": 0.026243194054073892, "acc_norm": 0.9090909090909091, "acc_norm_stderr": 0.026243194054073892 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.8518518518518519, "acc_stderr": 0.03434300243630999, "acc_norm": 0.8518518518518519, "acc_norm_stderr": 0.03434300243630999 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.8711656441717791, "acc_stderr": 0.02632138319878367, "acc_norm": 0.8711656441717791, "acc_norm_stderr": 0.02632138319878367 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.6517857142857143, "acc_stderr": 0.04521829902833585, "acc_norm": 0.6517857142857143, "acc_norm_stderr": 0.04521829902833585 }, "harness|hendrycksTest-management|5": { "acc": 0.8737864077669902, "acc_stderr": 0.03288180278808628, "acc_norm": 0.8737864077669902, "acc_norm_stderr": 0.03288180278808628 }, "harness|hendrycksTest-marketing|5": { "acc": 0.9401709401709402, "acc_stderr": 0.015537514263253874, "acc_norm": 0.9401709401709402, "acc_norm_stderr": 0.015537514263253874 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.85, "acc_stderr": 0.035887028128263734, "acc_norm": 0.85, "acc_norm_stderr": 0.035887028128263734 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.9157088122605364, "acc_stderr": 0.009934966499513786, "acc_norm": 0.9157088122605364, "acc_norm_stderr": 0.009934966499513786 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.8352601156069365, "acc_stderr": 0.019971040982442265, "acc_norm": 0.8352601156069365, "acc_norm_stderr": 0.019971040982442265 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.6346368715083799, "acc_stderr": 0.016104833880142302, "acc_norm": 0.6346368715083799, "acc_norm_stderr": 0.016104833880142302 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.8594771241830066, "acc_stderr": 0.01989943546353996, "acc_norm": 0.8594771241830066, "acc_norm_stderr": 0.01989943546353996 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.8392282958199357, "acc_stderr": 0.020862388082391888, "acc_norm": 0.8392282958199357, "acc_norm_stderr": 0.020862388082391888 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.8641975308641975, "acc_stderr": 0.0190615881815054, "acc_norm": 0.8641975308641975, "acc_norm_stderr": 0.0190615881815054 }, 
"harness|hendrycksTest-professional_accounting|5": { "acc": 0.6276595744680851, "acc_stderr": 0.028838921471251455, "acc_norm": 0.6276595744680851, "acc_norm_stderr": 0.028838921471251455 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.6121251629726207, "acc_stderr": 0.012444998309675631, "acc_norm": 0.6121251629726207, "acc_norm_stderr": 0.012444998309675631 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.8198529411764706, "acc_stderr": 0.02334516361654484, "acc_norm": 0.8198529411764706, "acc_norm_stderr": 0.02334516361654484 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.8137254901960784, "acc_stderr": 0.01575052628436337, "acc_norm": 0.8137254901960784, "acc_norm_stderr": 0.01575052628436337 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.7363636363636363, "acc_stderr": 0.04220224692971987, "acc_norm": 0.7363636363636363, "acc_norm_stderr": 0.04220224692971987 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.8285714285714286, "acc_stderr": 0.024127463462650156, "acc_norm": 0.8285714285714286, "acc_norm_stderr": 0.024127463462650156 }, "harness|hendrycksTest-sociology|5": { "acc": 0.8855721393034826, "acc_stderr": 0.022509345325101706, "acc_norm": 0.8855721393034826, "acc_norm_stderr": 0.022509345325101706 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.94, "acc_stderr": 0.023868325657594194, "acc_norm": 0.94, "acc_norm_stderr": 0.023868325657594194 }, "harness|hendrycksTest-virology|5": { "acc": 0.572289156626506, "acc_stderr": 0.03851597683718533, "acc_norm": 0.572289156626506, "acc_norm_stderr": 0.03851597683718533 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.8888888888888888, "acc_stderr": 0.024103384202072864, "acc_norm": 0.8888888888888888, "acc_norm_stderr": 0.024103384202072864 }, "harness|truthfulqa:mc|0": { "mc1": 0.412484700122399, "mc1_stderr": 0.017233299399571227, "mc2": 0.596080564321232, "mc2_stderr": 0.01451800985281567 }, "harness|winogrande|5": { "acc": 0.8303078137332282, "acc_stderr": 0.010549542647363696 }, "harness|gsm8k|5": { "acc": 0.6573161485974223, "acc_stderr": 0.013073030230827912 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. 
news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
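Beyond the `load_dataset` call shown above, the aggregated results file referenced under "Latest results" can also be fetched directly. This is an unofficial convenience sketch: the filename is taken from this card's link, the public repo id (without the `_private` suffix used in the card's example) is assumed, and the JSON layout is assumed to match the snippet shown (task name mapped to metrics, possibly nested under a "results" key).

```python
import json
from huggingface_hub import hf_hub_download

# Repo id and filename as referenced in this card; adjust if the file lives elsewhere.
path = hf_hub_download(
    repo_id="open-llm-leaderboard/details_Qwen__Qwen2-beta-72B",
    filename="results_2024-01-29T23-47-04.571636.json",
    repo_type="dataset",
)
with open(path) as f:
    data = json.load(f)

# Some dumps nest the per-task metrics under a "results" key; fall back to the flat layout.
results = data.get("results", data)
for task, metrics in sorted(results.items()):
    if not isinstance(metrics, dict):
        continue
    acc = metrics.get("acc_norm", metrics.get("acc"))
    if acc is not None:
        print(f"{task:65s} {acc:.3f}")
```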
open-llm-leaderboard/details_Qwen__Qwen2-beta-72B
[ "region:us" ]
2024-01-29T23:49:11+00:00
{"pretty_name": "Evaluation run of Qwen/Qwen2-beta-72B", "dataset_summary": "Dataset automatically created during the evaluation run of model [Qwen/Qwen2-beta-72B](https://huggingface.co/Qwen/Qwen2-beta-72B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Qwen__Qwen2-beta-72B_private\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-29T23:47:04.571636](https://huggingface.co/datasets/open-llm-leaderboard/details_Qwen__Qwen2-beta-72B_private/blob/main/results_2024-01-29T23-47-04.571636.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.7669743429877653,\n \"acc_stderr\": 0.027971495069922473,\n \"acc_norm\": 0.7715834368806984,\n \"acc_norm_stderr\": 0.028493498109494097,\n \"mc1\": 0.412484700122399,\n \"mc1_stderr\": 0.017233299399571227,\n \"mc2\": 0.596080564321232,\n \"mc2_stderr\": 0.01451800985281567\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6262798634812287,\n \"acc_stderr\": 0.014137708601759095,\n \"acc_norm\": 0.658703071672355,\n \"acc_norm_stderr\": 0.01385583128749773\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6666998605855408,\n \"acc_stderr\": 0.004704293898729911,\n \"acc_norm\": 0.8598884684325832,\n \"acc_norm_stderr\": 0.003463933286063887\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.43,\n \"acc_stderr\": 0.049756985195624284,\n \"acc_norm\": 0.43,\n \"acc_norm_stderr\": 0.049756985195624284\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.7185185185185186,\n \"acc_stderr\": 0.038850042458002526,\n \"acc_norm\": 0.7185185185185186,\n \"acc_norm_stderr\": 0.038850042458002526\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.8881578947368421,\n \"acc_stderr\": 0.02564834125169361,\n \"acc_norm\": 0.8881578947368421,\n \"acc_norm_stderr\": 0.02564834125169361\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.8,\n \"acc_stderr\": 0.04020151261036845,\n \"acc_norm\": 0.8,\n \"acc_norm_stderr\": 0.04020151261036845\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.8226415094339623,\n \"acc_stderr\": 0.023508739218846938,\n \"acc_norm\": 0.8226415094339623,\n \"acc_norm_stderr\": 0.023508739218846938\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.9027777777777778,\n \"acc_stderr\": 0.024774516250440175,\n \"acc_norm\": 0.9027777777777778,\n \"acc_norm_stderr\": 0.024774516250440175\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.52,\n \"acc_stderr\": 0.050211673156867795,\n 
\"acc_norm\": 0.52,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.64,\n \"acc_stderr\": 0.048241815132442176,\n \"acc_norm\": 0.64,\n \"acc_norm_stderr\": 0.048241815132442176\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.54,\n \"acc_stderr\": 0.05009082659620332,\n \"acc_norm\": 0.54,\n \"acc_norm_stderr\": 0.05009082659620332\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.7630057803468208,\n \"acc_stderr\": 0.032424147574830975,\n \"acc_norm\": 0.7630057803468208,\n \"acc_norm_stderr\": 0.032424147574830975\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.5490196078431373,\n \"acc_stderr\": 0.049512182523962604,\n \"acc_norm\": 0.5490196078431373,\n \"acc_norm_stderr\": 0.049512182523962604\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.84,\n \"acc_stderr\": 0.03684529491774709,\n \"acc_norm\": 0.84,\n \"acc_norm_stderr\": 0.03684529491774709\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.8085106382978723,\n \"acc_stderr\": 0.025722149992637798,\n \"acc_norm\": 0.8085106382978723,\n \"acc_norm_stderr\": 0.025722149992637798\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5877192982456141,\n \"acc_stderr\": 0.046306532033665956,\n \"acc_norm\": 0.5877192982456141,\n \"acc_norm_stderr\": 0.046306532033665956\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.7862068965517242,\n \"acc_stderr\": 0.03416520447747549,\n \"acc_norm\": 0.7862068965517242,\n \"acc_norm_stderr\": 0.03416520447747549\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.6984126984126984,\n \"acc_stderr\": 0.0236369759961018,\n \"acc_norm\": 0.6984126984126984,\n \"acc_norm_stderr\": 0.0236369759961018\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.5793650793650794,\n \"acc_stderr\": 0.04415438226743745,\n \"acc_norm\": 0.5793650793650794,\n \"acc_norm_stderr\": 0.04415438226743745\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.51,\n \"acc_stderr\": 0.05024183937956911,\n \"acc_norm\": 0.51,\n \"acc_norm_stderr\": 0.05024183937956911\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.8838709677419355,\n \"acc_stderr\": 0.018225757949432306,\n \"acc_norm\": 0.8838709677419355,\n \"acc_norm_stderr\": 0.018225757949432306\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.6600985221674877,\n \"acc_stderr\": 0.033327690684107895,\n \"acc_norm\": 0.6600985221674877,\n \"acc_norm_stderr\": 0.033327690684107895\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.84,\n \"acc_stderr\": 0.03684529491774707,\n \"acc_norm\": 0.84,\n \"acc_norm_stderr\": 0.03684529491774707\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.8545454545454545,\n \"acc_stderr\": 0.027530196355066573,\n \"acc_norm\": 0.8545454545454545,\n \"acc_norm_stderr\": 0.027530196355066573\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.9292929292929293,\n \"acc_stderr\": 0.0182631054201995,\n \"acc_norm\": 0.9292929292929293,\n \"acc_norm_stderr\": 0.0182631054201995\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.9896373056994818,\n \"acc_stderr\": 0.007308424386792194,\n \"acc_norm\": 0.9896373056994818,\n \"acc_norm_stderr\": 0.007308424386792194\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 
0.8153846153846154,\n \"acc_stderr\": 0.01967163241310029,\n \"acc_norm\": 0.8153846153846154,\n \"acc_norm_stderr\": 0.01967163241310029\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.4925925925925926,\n \"acc_stderr\": 0.030482192395191506,\n \"acc_norm\": 0.4925925925925926,\n \"acc_norm_stderr\": 0.030482192395191506\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.8361344537815126,\n \"acc_stderr\": 0.024044054940440488,\n \"acc_norm\": 0.8361344537815126,\n \"acc_norm_stderr\": 0.024044054940440488\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.5761589403973509,\n \"acc_stderr\": 0.04034846678603396,\n \"acc_norm\": 0.5761589403973509,\n \"acc_norm_stderr\": 0.04034846678603396\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.9302752293577982,\n \"acc_stderr\": 0.01091942641184862,\n \"acc_norm\": 0.9302752293577982,\n \"acc_norm_stderr\": 0.01091942641184862\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.6851851851851852,\n \"acc_stderr\": 0.0316746870682898,\n \"acc_norm\": 0.6851851851851852,\n \"acc_norm_stderr\": 0.0316746870682898\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.9313725490196079,\n \"acc_stderr\": 0.017744453647073322,\n \"acc_norm\": 0.9313725490196079,\n \"acc_norm_stderr\": 0.017744453647073322\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.9029535864978903,\n \"acc_stderr\": 0.019269323025640273,\n \"acc_norm\": 0.9029535864978903,\n \"acc_norm_stderr\": 0.019269323025640273\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7937219730941704,\n \"acc_stderr\": 0.02715715047956382,\n \"acc_norm\": 0.7937219730941704,\n \"acc_norm_stderr\": 0.02715715047956382\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.8778625954198473,\n \"acc_stderr\": 0.02871877688934232,\n \"acc_norm\": 0.8778625954198473,\n \"acc_norm_stderr\": 0.02871877688934232\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.9090909090909091,\n \"acc_stderr\": 0.026243194054073892,\n \"acc_norm\": 0.9090909090909091,\n \"acc_norm_stderr\": 0.026243194054073892\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8518518518518519,\n \"acc_stderr\": 0.03434300243630999,\n \"acc_norm\": 0.8518518518518519,\n \"acc_norm_stderr\": 0.03434300243630999\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.8711656441717791,\n \"acc_stderr\": 0.02632138319878367,\n \"acc_norm\": 0.8711656441717791,\n \"acc_norm_stderr\": 0.02632138319878367\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.6517857142857143,\n \"acc_stderr\": 0.04521829902833585,\n \"acc_norm\": 0.6517857142857143,\n \"acc_norm_stderr\": 0.04521829902833585\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8737864077669902,\n \"acc_stderr\": 0.03288180278808628,\n \"acc_norm\": 0.8737864077669902,\n \"acc_norm_stderr\": 0.03288180278808628\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.9401709401709402,\n \"acc_stderr\": 0.015537514263253874,\n \"acc_norm\": 0.9401709401709402,\n \"acc_norm_stderr\": 0.015537514263253874\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.85,\n \"acc_stderr\": 0.035887028128263734,\n \"acc_norm\": 0.85,\n \"acc_norm_stderr\": 0.035887028128263734\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.9157088122605364,\n \"acc_stderr\": 0.009934966499513786,\n \"acc_norm\": 
0.9157088122605364,\n \"acc_norm_stderr\": 0.009934966499513786\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.8352601156069365,\n \"acc_stderr\": 0.019971040982442265,\n \"acc_norm\": 0.8352601156069365,\n \"acc_norm_stderr\": 0.019971040982442265\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.6346368715083799,\n \"acc_stderr\": 0.016104833880142302,\n \"acc_norm\": 0.6346368715083799,\n \"acc_norm_stderr\": 0.016104833880142302\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.8594771241830066,\n \"acc_stderr\": 0.01989943546353996,\n \"acc_norm\": 0.8594771241830066,\n \"acc_norm_stderr\": 0.01989943546353996\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.8392282958199357,\n \"acc_stderr\": 0.020862388082391888,\n \"acc_norm\": 0.8392282958199357,\n \"acc_norm_stderr\": 0.020862388082391888\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.8641975308641975,\n \"acc_stderr\": 0.0190615881815054,\n \"acc_norm\": 0.8641975308641975,\n \"acc_norm_stderr\": 0.0190615881815054\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.6276595744680851,\n \"acc_stderr\": 0.028838921471251455,\n \"acc_norm\": 0.6276595744680851,\n \"acc_norm_stderr\": 0.028838921471251455\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.6121251629726207,\n \"acc_stderr\": 0.012444998309675631,\n \"acc_norm\": 0.6121251629726207,\n \"acc_norm_stderr\": 0.012444998309675631\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.8198529411764706,\n \"acc_stderr\": 0.02334516361654484,\n \"acc_norm\": 0.8198529411764706,\n \"acc_norm_stderr\": 0.02334516361654484\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.8137254901960784,\n \"acc_stderr\": 0.01575052628436337,\n \"acc_norm\": 0.8137254901960784,\n \"acc_norm_stderr\": 0.01575052628436337\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7363636363636363,\n \"acc_stderr\": 0.04220224692971987,\n \"acc_norm\": 0.7363636363636363,\n \"acc_norm_stderr\": 0.04220224692971987\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.8285714285714286,\n \"acc_stderr\": 0.024127463462650156,\n \"acc_norm\": 0.8285714285714286,\n \"acc_norm_stderr\": 0.024127463462650156\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8855721393034826,\n \"acc_stderr\": 0.022509345325101706,\n \"acc_norm\": 0.8855721393034826,\n \"acc_norm_stderr\": 0.022509345325101706\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.94,\n \"acc_stderr\": 0.023868325657594194,\n \"acc_norm\": 0.94,\n \"acc_norm_stderr\": 0.023868325657594194\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.572289156626506,\n \"acc_stderr\": 0.03851597683718533,\n \"acc_norm\": 0.572289156626506,\n \"acc_norm_stderr\": 0.03851597683718533\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8888888888888888,\n \"acc_stderr\": 0.024103384202072864,\n \"acc_norm\": 0.8888888888888888,\n \"acc_norm_stderr\": 0.024103384202072864\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.412484700122399,\n \"mc1_stderr\": 0.017233299399571227,\n \"mc2\": 0.596080564321232,\n \"mc2_stderr\": 0.01451800985281567\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8303078137332282,\n \"acc_stderr\": 0.010549542647363696\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6573161485974223,\n \"acc_stderr\": 0.013073030230827912\n }\n}\n```", "repo_url": "https://huggingface.co/Qwen/Qwen2-beta-72B", 
"leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|arc:challenge|25_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|gsm8k|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hellaswag|10_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-29T23-47-04.571636.parquet", 
"**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-29T23-47-04.571636.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-29T23-47-04.571636.parquet", 
"**/details_harness|hendrycksTest-prehistory|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-29T23-47-04.571636.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-29T23-47-04.571636.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-29T23-47-04.571636.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["**/details_harness|winogrande|5_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-29T23-47-04.571636.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_01_29T23_47_04.571636", "path": ["results_2024-01-29T23-47-04.571636.parquet"]}, {"split": "latest", "path": 
["results_2024-01-29T23-47-04.571636.parquet"]}]}]}
2024-01-29T23:49:44+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of Qwen/Qwen2-beta-72B Dataset automatically created during the evaluation run of model Qwen/Qwen2-beta-72B on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2024-01-29T23:47:04.571636 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
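The loading call referenced above is not reproduced in this plain-text rendering, so here is a minimal sketch of what it might look like. The repository name is an assumption based on the usual `open-llm-leaderboard/details_<org>__<model>` naming convention; the `results` config and `latest` split names come from the config list above.

```python
from datasets import load_dataset

# Minimal sketch: repository name assumed from the leaderboard naming convention.
# Loads the aggregated "results" config at its latest timestamp.
data = load_dataset(
    "open-llm-leaderboard/details_Qwen__Qwen2-beta-72B",
    "results",
    split="latest",
)
print(data)
```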
[ "# Dataset Card for Evaluation run of Qwen/Qwen2-beta-72B\n\n\n\nDataset automatically created during the evaluation run of model Qwen/Qwen2-beta-72B on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-01-29T23:47:04.571636(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of Qwen/Qwen2-beta-72B\n\n\n\nDataset automatically created during the evaluation run of model Qwen/Qwen2-beta-72B on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2024-01-29T23:47:04.571636(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
a23bc3400cd221de3f2f94a0be29418e51333fd7
# Dataset Card for "relabeled_alpacafarm_pythiasft_20K_preference_data_modelength" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Asap7772/relabeled_alpacafarm_pythiasft_20K_preference_data_modelength
[ "region:us" ]
2024-01-30T00:13:50+00:00
{"dataset_info": {"features": [{"name": "output", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "alpaca_text", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "alpaca_prompt", "dtype": "string"}, {"name": "y_ref", "dtype": "string"}, {"name": "y_1", "dtype": "string"}, {"name": "y_2", "dtype": "string"}, {"name": "y_w", "dtype": "string"}, {"name": "y_w_alpaca", "dtype": "string"}, {"name": "y_l", "dtype": "string"}, {"name": "y_l_alpaca", "dtype": "string"}, {"name": "y_w_score", "dtype": "float64"}, {"name": "y_l_score", "dtype": "float64"}, {"name": "score_diff", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 177945579, "num_examples": 19000}, {"name": "test", "num_bytes": 9378616, "num_examples": 1000}], "download_size": 86092842, "dataset_size": 187324195}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]}
2024-01-30T00:48:31+00:00
[]
[]
TAGS #region-us
# Dataset Card for "relabeled_alpacafarm_pythiasft_20K_preference_data_modelength" More Information needed
[ "# Dataset Card for \"relabeled_alpacafarm_pythiasft_20K_preference_data_modelength\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"relabeled_alpacafarm_pythiasft_20K_preference_data_modelength\"\n\nMore Information needed" ]
096faf56f5facedf92caa606b529ad675d61eb7b
# Dataset Card for Dataset Name This dataset contains Japanese-Korean paired text which comes from [Helsinki-NLP/Tatoeba-Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/blob/master/data/README-v2023-09-26.md). ## Dataset Details ### Dataset Description - **Curated by:** [Helsinki-NLP](https://github.com/Helsinki-NLP) - **Language(s) (NLP):** Japanese-Korean - **License:** CC BY-NC-SA 4.0 ### Dataset Sources - **Repository:** [Helsinki-NLP/Tatoeba-Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/blob/master/data/README-v2023-09-26.md) - **Detail**: Japanese - Korean [jpn-kor](https://object.pouta.csc.fi/Tatoeba-Challenge-v2023-09-26/jpn-kor.tar) ## Uses The dataset can be used to train a translation model that translates Japanese sentences into Korean. ### Out-of-Scope Use You cannot use this dataset to train a model that will be used in a commercial service. ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> Each dataset has two columns: `sourceString` and `targetString`, which correspond to the Japanese and Korean sentences, respectively. Check [example code](https://huggingface.co/datasets/sappho192/Tatoeba-Challenge-jpn-kor/blob/main/example.ipynb) to learn how to load the dataset. ## Dataset Creation ### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> This dataset may contain the following kinds of inappropriate or explicit sentences: - personal - sensitive - private - data that reveals addresses - uniquely identifiable names or aliases - racial or ethnic origins - sexual orientations - religious beliefs - political opinions - financial or health data - etc. So use it at your own risk. ## Citation **BibTeX:** ```bibtex @inproceedings{tiedemann-2020-tatoeba, title = "The {T}atoeba {T}ranslation {C}hallenge {--} {R}ealistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.wmt-1.139", pages = "1174--1182" } ``` ## Dataset Card Authors [sappho192](https://huggingface.co/sappho192) ## Dataset Card Contact Please create a thread in the community.
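The card above points to an example notebook for loading the data; as a rough sketch (not taken from the original card), the standard `datasets` API should also work. The split name `train` is an assumption; the column names come from the Dataset Structure section.

```python
from datasets import load_dataset

# Hedged sketch: "train" split name is assumed; columns are documented in the card.
ds = load_dataset("sappho192/Tatoeba-Challenge-jpn-kor", split="train")
pair = ds[0]
print(pair["sourceString"])  # Japanese sentence
print(pair["targetString"])  # Korean translation
```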
sappho192/Tatoeba-Challenge-jpn-kor
[ "task_categories:translation", "size_categories:10M<n<100M", "language:ja", "language:ko", "license:cc-by-nc-sa-4.0", "region:us" ]
2024-01-30T01:01:30+00:00
{"language": ["ja", "ko"], "license": "cc-by-nc-sa-4.0", "size_categories": ["10M<n<100M"], "task_categories": ["translation"]}
2024-01-30T16:51:21+00:00
[]
[ "ja", "ko" ]
TAGS #task_categories-translation #size_categories-10M<n<100M #language-Japanese #language-Korean #license-cc-by-nc-sa-4.0 #region-us
# Dataset Card for Dataset Name This dataset contains Japanese-Korean paired text which comes from Helsinki-NLP/Tatoeba-Challenge. ## Dataset Details ### Dataset Description - Curated by: Helsinki-NLP - Language(s) (NLP): Japanese-Korean - License: CC BY-NC-SA 4.0 ### Dataset Sources - Repository: Helsinki-NLP/Tatoeba-Challenge - Detail: Japanese - Korean jpn-kor ## Uses The dataset can be used to train a translation model that translates Japanese sentences into Korean. ### Out-of-Scope Use You cannot use this dataset to train a model that will be used in a commercial service. ## Dataset Structure Each dataset has two columns: 'sourceString' and 'targetString', which correspond to the Japanese and Korean sentences, respectively. Check example code to learn how to load the dataset. ## Dataset Creation ### Personal and Sensitive Information This dataset may contain the following kinds of inappropriate or explicit sentences: - personal - sensitive - private - data that reveals addresses - uniquely identifiable names or aliases - racial or ethnic origins - sexual orientations - religious beliefs - political opinions - financial or health data - etc. So use it at your own risk. BibTeX: ## Dataset Card Authors sappho192 ## Dataset Card Contact Please create a thread in the community.
[ "# Dataset Card for Dataset Name\n\nThis dataset contains Japanese-Korean paired text which is from Helsinki-NLP/Tatoeba-Challenge.", "## Dataset Details", "### Dataset Description\n\n- Curated by: Helsinki-NLP\n- Language(s) (NLP): Japanese-Korean\n- License: CC BY-NC-SA 4.0", "### Dataset Sources\n\n- Repository: Helsinki-NLP/Tatoeba-Challenge\n- Detail: Japanese - Korean jpn-kor", "## Uses\n\nThe dataset can be used to train the translation model that translates Japanese sentence to Korean.", "### Out-of-Scope Use\n\nYou cannot use this dataset to train the model which is to be used under commercial service.", "## Dataset Structure\n\n\n\nEach dataset has two columns: 'sourceString' and 'targetString', which corresponds to Japanese and Korean sentence. \nCheck example code to learn how to load the dataset.", "## Dataset Creation", "### Personal and Sensitive Information\n\n\n\nThis dataset may contain following inappropriate or explicit sentences:\n- personal\n- sensitive\n- private\n - data that reveals addresses\n - uniquely identifiable names or aliases\n - racial or ethnic origins\n - sexual orientations\n - religious beliefs\n - political opinions\n - financial or health data\n - etc.\n\nSo use with your own risk.\n\n\nBibTeX:", "## Dataset Card Authors\n\nsappho192", "## Dataset Card Contact\n\nPlease create a thread in the community." ]
[ "TAGS\n#task_categories-translation #size_categories-10M<n<100M #language-Japanese #language-Korean #license-cc-by-nc-sa-4.0 #region-us \n", "# Dataset Card for Dataset Name\n\nThis dataset contains Japanese-Korean paired text which is from Helsinki-NLP/Tatoeba-Challenge.", "## Dataset Details", "### Dataset Description\n\n- Curated by: Helsinki-NLP\n- Language(s) (NLP): Japanese-Korean\n- License: CC BY-NC-SA 4.0", "### Dataset Sources\n\n- Repository: Helsinki-NLP/Tatoeba-Challenge\n- Detail: Japanese - Korean jpn-kor", "## Uses\n\nThe dataset can be used to train the translation model that translates Japanese sentence to Korean.", "### Out-of-Scope Use\n\nYou cannot use this dataset to train the model which is to be used under commercial service.", "## Dataset Structure\n\n\n\nEach dataset has two columns: 'sourceString' and 'targetString', which corresponds to Japanese and Korean sentence. \nCheck example code to learn how to load the dataset.", "## Dataset Creation", "### Personal and Sensitive Information\n\n\n\nThis dataset may contain following inappropriate or explicit sentences:\n- personal\n- sensitive\n- private\n - data that reveals addresses\n - uniquely identifiable names or aliases\n - racial or ethnic origins\n - sexual orientations\n - religious beliefs\n - political opinions\n - financial or health data\n - etc.\n\nSo use with your own risk.\n\n\nBibTeX:", "## Dataset Card Authors\n\nsappho192", "## Dataset Card Contact\n\nPlease create a thread in the community." ]
b64741a8eb2b4c3c4749cc61bffaf33675b6506c
# Dataset Card for Spine Segmentation: Discs, Vertebrae and Spinal Canal (SPIDER) The SPIDER data set contains lumbar spine magnetic resonance images (MRI) and segmentation masks described in the following paper: Jasper W. van der Graaf, Miranda L. van Hooff, Constantinus F. M. Buckens, Matthieu Rutten, Job L. C. van Susante, Robert Jan Kroeze, Marinus de Kleuver, Bram van Ginneken, Nikolas Lessmann. (2023). *Lumbar spine segmentation in MR images: a dataset and a public benchmark.* https://arxiv.org/abs/2306.12217. The data were made publicly available through [Zenodo](https://zenodo.org/records/8009680), an open repository operated by CERN, and posted on [Grand Challenge](https://spider.grand-challenge.org/). (***Disclaimer**: I am not affiliated in any way with the aforementioned paper, researchers, or organizations. My only contribution is to curate the SPIDER data set here on Hugging Face to increase accessibility. While I have taken care to curate the data in a way that maintains the integrity of the original data, any findings using this particular data set should be validated against the original data provided by the researchers on [Zenodo](https://zenodo.org/records/8009680).*) ## Table of Contents (Placeholder--to be filled in at end of project) ## Dataset Description - **Paper:** [Lumbar spine segmentation in MR images: a dataset and a public benchmark](https://arxiv.org/abs/2306.12217) - **Repository:** [Zenodo](https://zenodo.org/records/8009680) ### Dataset Summary The dataset includes 447 sagittal T1 and T2 MRI series collected from 218 patients across four hospitals. Segmentation masks indicating the vertebrae, intervertebral discs (IVDs), and spinal canal are also included. Segmentation masks were created manually by a medical trainee under the supervision of a medical imaging expert and an experienced musculoskeletal radiologist. In addition to MR images and segmentation masks, additional metadata (e.g., scanner manufacturer, pixel bandwidth, etc.), limited patient characteristics (biological sex and age, when available), and radiological gradings indicating specific degenerative changes can be loaded with the corresponding image data. ## Dataset Structure ### Data Instances There are 447 images and corresponding segmentation masks for 218 unique patients. 
### Data Fields The following list includes the data fields available for importing: - Numeric representation of image - Numeric representation of segmentation mask - vertebrae - intervertebral discs - spinal canal - Image characteristics - number of vertebrae - number of discs - Patient characteristics - biological sex - age - Scanner characteristics - manufacturer - manufacturer model name - serial number - software version - echo numbers - echo time - echo train length - flip angle - imaged nucleus - imaging frequency - inplane phase encoding direction - MR acquisition type - magnetic field strength - number of phase encoding steps - percent phase field of view - percent sampling - photometric interpretation - pixel bandwidth - pixel spacing - repetition time - specific absorption rate (SAR) - samples per pixel - scanning sequence - sequence name - series description - slice thickness - spacing between slices - specific character set - transmit coil name - window center - window width (TODO: Will add variable descriptions after proposal approval) ### Data Splits The training set contains [x] images distributed as follows: - Unique individuals: [x] - Standard sagittal T1 images: [x] - Standard sagittal T2 images: [y] - Standard sagittal T2 SPACE images: [z] - The validation set contains 87 images distributed as follows: - Unique individuals: [x] - Standard sagittal T1 images: [x] - Standard sagittal T2 images: [y] - Standard sagittal T2 SPACE images: [z] An additional hidden test set (not available through Hugging Face) is available on the [SPIDER Grand Challenge](spider.grand-challenge.org). ## Image Resolution Standard sagittal T1 and T2 image resolution ranges from 3.3 x 0.33 x 0.33 mm to 4.8 x 0.90 x 0.90 mm. Sagittal T2 SPACE sequence images had a near isotropic spatial resolution with a voxel size of 0.90 x 0.47 x 0.47 mm. [Source](https://spider.grand-challenge.org/data/) ## Dataset Curation The data have been curated to enable users to load any of the following: - Raw image files - Raw segmentation masks - Numeric representations of images in tensor format - Numeric representations of segmentation masks in tensor format - Linked patient characteristics (limited to sex and age, if available) - Linked scanner characteristics ### Source Data ### Processing Steps (Specifics to be determined, but will include:) 1. Conversion of .mha files to numeric representations 2. Linking of segmentation mask numeric representations to image files 3. Linking of patient and scanner characteristics to image files 4. Cleaning of patient and scanner characteristics ## Additional Information ### License The dataset is published under a CC-BY 4.0 license: https://creativecommons.org/licenses/by/4.0/legalcode. ### Citation Jasper W. van der Graaf, Miranda L. van Hooff, Constantinus F. M. Buckens, Matthieu Rutten, Job L. C. van Susante, Robert Jan Kroeze, Marinus de Kleuver, Bram van Ginneken, Nikolas Lessmann. (2023). *Lumbar spine segmentation in MR images: a dataset and a public benchmark.* https://arxiv.org/abs/2306.12217.
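To make processing step 1 concrete, the following is an illustrative sketch (not part of the original card) of converting a single `.mha` image and its segmentation mask to NumPy arrays with SimpleITK; the file names are hypothetical placeholders.

```python
import SimpleITK as sitk
import numpy as np

# Illustrative sketch only: file names below are hypothetical placeholders.
image = sitk.ReadImage("1_t1.mha")        # a sagittal T1 or T2 series
mask = sitk.ReadImage("1_t1_mask.mha")    # vertebrae / disc / spinal-canal labels

# SimpleITK returns arrays ordered (slices, rows, columns).
image_array = sitk.GetArrayFromImage(image)
mask_array = sitk.GetArrayFromImage(mask)

print(image_array.shape)      # spatial dimensions of the series
print(np.unique(mask_array))  # integer label values present in the mask
```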
cdoswald/SPIDER
[ "language:en", "license:cc-by-4.0", "medical", "MRI", "spine", "segmentation", "arxiv:2306.12217", "region:us" ]
2024-01-30T01:02:48+00:00
{"language": ["en"], "license": "cc-by-4.0", "tags": ["medical", "MRI", "spine", "segmentation"]}
2024-02-01T02:46:04+00:00
[ "2306.12217" ]
[ "en" ]
TAGS #language-English #license-cc-by-4.0 #medical #MRI #spine #segmentation #arxiv-2306.12217 #region-us
# Dataset Card for Spine Segmentation: Discs, Vertebrae and Spinal Canal (SPIDER) The SPIDER data set contains lumbar spine magnetic resonance images (MRI) and segmentation masks described in the following paper: Jasper W. van der Graaf, Miranda L. van Hooff, Constantinus F. M. Buckens, Matthieu Rutten, Job L. C. van Susante, Robert Jan Kroeze, Marinus de Kleuver, Bram van Ginneken, Nikolas Lessmann. (2023). *Lumbar spine segmentation in MR images: a dataset and a public benchmark.* URL The data were made publicly available through Zenodo, an open repository operated by CERN, and posted on Grand Challenge. (*Disclaimer: I am not affiliated in any way with the aforementioned paper, researchers, or organizations. My only contribution is to curate the SPIDER data set here on Hugging Face to increase accessibility. While I have taken care to curate the data in a way that maintains the integrity of the original data, any findings using this particular data set should be validated against the original data provided by the researchers on Zenodo.*) ## Table of Contents (Placeholder--to be filled in at end of project) ## Dataset Description - Paper: Lumbar spine segmentation in MR images: a dataset and a public benchmark - Repository: Zenodo ### Dataset Summary The dataset includes 447 sagittal T1 and T2 MRI series collected from 218 patients across four hospitals. Segmentation masks indicating the vertebrae, intervertebral discs (IVDs), and spinal canal are also included. Segmentation masks were created manually by a medical trainee under the supervision of a medical imaging expert and an experienced musculoskeletal radiologist. In addition to MR images and segmentation masks, additional metadata (e.g., scanner manufacturer, pixel bandwidth, etc.), limited patient characteristics (biological sex and age, when available), and radiological gradings indicating specific degenerative changes can be loaded with the corresponding image data. ## Dataset Structure ### Data Instances There are 447 images and corresponding segmentation masks for 218 unique patients. 
### Data Fields The following list includes the data fields available for importing: - Numeric representation of image - Numeric representation of segmentation mask - vertebrae - intervertebral discs - spinal canal - Image characteristics - number of vertebrae - number of discs - Patient characteristics - biological sex - age - Scanner characteristics - manufacturer - manufacturer model name - serial number - software version - echo numbers - echo time - echo train length - flip angle - imaged nucleus - imaging frequency - inplane phase encoding direction - MR acquisition type - magnetic field strength - number of phase encoding steps - percent phase field of view - percent sampling - photometric interpretation - pixel bandwidth - pixel spacing - repetition time - specific absorption rate (SAR) - samples per pixel - scanning sequence - sequence name - series description - slice thickness - spacing between slices - specific character set - transmit coil name - window center - window width (TODO: Will add variable descriptions after proposal approval) ### Data Splits The training set contains [x] images distributed as follows: - Unique individuals: [x] - Standard sagittal T1 images: [x] - Standard sagittal T2 images: [y] - Standard sagittal T2 SPACE images: [z] - The validation set contains 87 images distributed as follows: - Unique individuals: [x] - Standard sagittal T1 images: [x] - Standard sagittal T2 images: [y] - Standard sagittal T2 SPACE images: [z] An additional hidden test set (not available through Hugging Face) is available on the SPIDER Grand Challenge. ## Image Resolution Standard sagittal T1 and T2 image resolution ranges from 3.3 x 0.33 x 0.33 mm to 4.8 x 0.90 x 0.90 mm. Sagittal T2 SPACE sequence images had a near isotropic spatial resolution with a voxel size of 0.90 x 0.47 x 0.47 mm. Source ## Dataset Curation The data have been curated to enable users to load any of the following: - Raw image files - Raw segmentation masks - Numeric representations of images in tensor format - Numeric representations of segmentation masks in tensor format - Linked patient characteristics (limited to sex and age, if available) - Linked scanner characteristics ### Source Data ### Processing Steps (Specifics to be determined, but will include:) 1. Conversion of .mha files to numeric representations 2. Linking of segmentation mask numeric representations to image files 3. Linking of patient and scanner characteristics to image files 4. Cleaning of patient and scanner characteristics ## Additional Information ### License The dataset is published under a CC-BY 4.0 license: URL Jasper W. van der Graaf, Miranda L. van Hooff, Constantinus F. M. Buckens, Matthieu Rutten, Job L. C. van Susante, Robert Jan Kroeze, Marinus de Kleuver, Bram van Ginneken, Nikolas Lessmann. (2023). *Lumbar spine segmentation in MR images: a dataset and a public benchmark.* URL
[ "# Dataset Card for Spine Segmentation: Discs, Vertebrae and Spinal Canal (SPIDER)\n\nThe SPIDER data set contains lumbar spine magnetic resonance images (MRI) and segmentation masks described in the following paper:\n\nJasper W. van der Graaf, Miranda L. van Hooff, Constantinus F. M. Buckens, Matthieu Rutten, \nJob L. C. van Susante, Robert Jan Kroeze, Marinus de Kleuver, Bram van Ginneken, Nikolas Lessmann. (2023).\n*Lumbar spine segmentation in MR images: a dataset and a public benchmark.* URL\n\nThe data were made publicly available through Zenodo, an open repository operated by CERN, and posted on \nGrand Challenge.\n\n(*Disclaimer: I am not affiliated in any way with the aforementioned paper, researchers, or organizations. My only contribution is to curate the SPIDER data set \nhere on Hugging Face to increase accessibility. While I have taken care to curate the data in a way that maintains the integrity of the original data, any findings using this\nparticular data set should be validated against the original data provided by the researchers on Zenodo.*)", "## Table of Contents\n\n(Placeholder--to be filled in at end of project)", "## Dataset Description\n\n- Paper: Lumbar spine segmentation in MR images: a dataset and a public benchmark\n- Repository: Zenodo", "### Dataset Summary\n\nThe dataset includes 447 sagittal T1 and T2 MRI series collected from 218 patients across four hospitals.\nSegmentation masks indicating the vertebrae, intervertebral discs (IVDs), and spinal canal are also included.\nSegmentation masks were created manually by a medical trainee under the supervision of\na medical imaging expert and an experienced musculoskeletal radiologist.\n\nIn addition to MR images and segmentation masks, additional metadata (e.g., scanner manufacturer, pixel bandwidth, etc.), limited\npatient characteristics (biological sex and age, when available), and radiological gradings indicating specific degenerative\nchanges can be loaded with the corresponding image data.", "## Dataset Structure", "### Data Instances\n\nThere are 447 images and corresponding segmentation masks for 218 unique patients.", "### Data Fields\n\nThe following list includes the data fields available for importing:\n\n- Numeric representation of image\n \n- Numeric representation of segmentation mask\n - vertebrae\n - intervertebral discs\n - spinal canal\n \n- Image characteristics\n - number of vertebrae\n - number of discs\n \n- Patient characteristics\n - biological sex\n - age\n \n- Scanner characteristics\n - manufacturer\n - manufacturer model name\n - serial number\n - software version\n - echo numbers\n - echo time\n - echo train length\n - flip angle\n - imaged nucleus\n - imaging frequency\n - inplane phase encoding direction\n - MR acquisition type\n - magnetic field strength\n - number of phase encoding steps\n - percent phase field of view\n - percent sampling\n - photometric interpretation\n - pixel bandwidth\n - pixel spacing\n - repetition time\n - specific absorption rate (SAR)\n - samples per pixel\n - scanning sequence\n - sequence name\n - series description\n - slice thickness\n - spacing between slices\n - specific character set\n - transmit coil name\n - window center\n - window width\n\n(TODO: Will add variable descriptions after proposal approval)", "### Data Splits\n\nThe training set contains [x] images distributed as follows:\n\n- Unique individuals: [x]\n\n- Standard sagittal T1 images: [x]\n- Standard sagittal T2 images: [y]\n- Standard sagittal T2 SPACE images: [z]\n- \nThe 
validation set contains 87 images distributed as follows:\n\n- Unique individuals: [x]\n\n- Standard sagittal T1 images: [x]\n- Standard sagittal T2 images: [y]\n- Standard sagittal T2 SPACE images: [z]\n\nAn additional hidden test set (not available through Hugging Face) is available on the SPIDER Grand Challenge.", "## Image Resolution\n\nStandard sagittal T1 and T2 image resolution ranges from 3.3 x 0.33 x 0.33 mm to 4.8 x 0.90 x 0.90 mm. \nSagittal T2 SPACE sequence images had a near isotropic spatial resolution with a voxel size of 0.90 x 0.47 x 0.47 mm.\nSource", "## Dataset Curation\n\nThe data have been curated to enable users to load any of the following:\n\n- Raw image files\n- Raw segmentation masks\n- Numeric representations of images in tensor format\n- Numeric representations of segmentation masks in tensor format\n- Linked patient characteristics (limited to sex and age, if available)\n- Linked scanner characteristics", "### Source Data", "### Processing Steps\n\n(Specifics to be determined, but will include:)\n\n1. Conversion of .mha files to numeric representations\n2. Linking of segmentation mask numeric representations to image files\n3. Linking of patient and scanner characteristics to image files\n4. Cleaning of patient and scanner characteristics", "## Additional Information", "### License\n\nThe dataset is published under a CC-BY 4.0 license: URL\n\nJasper W. van der Graaf, Miranda L. van Hooff, Constantinus F. M. Buckens, Matthieu Rutten, \nJob L. C. van Susante, Robert Jan Kroeze, Marinus de Kleuver, Bram van Ginneken, Nikolas Lessmann. (2023).\n*Lumbar spine segmentation in MR images: a dataset and a public benchmark.* URL" ]
[ "TAGS\n#language-English #license-cc-by-4.0 #medical #MRI #spine #segmentation #arxiv-2306.12217 #region-us \n", "# Dataset Card for Spine Segmentation: Discs, Vertebrae and Spinal Canal (SPIDER)\n\nThe SPIDER data set contains lumbar spine magnetic resonance images (MRI) and segmentation masks described in the following paper:\n\nJasper W. van der Graaf, Miranda L. van Hooff, Constantinus F. M. Buckens, Matthieu Rutten, \nJob L. C. van Susante, Robert Jan Kroeze, Marinus de Kleuver, Bram van Ginneken, Nikolas Lessmann. (2023).\n*Lumbar spine segmentation in MR images: a dataset and a public benchmark.* URL\n\nThe data were made publicly available through Zenodo, an open repository operated by CERN, and posted on \nGrand Challenge.\n\n(*Disclaimer: I am not affiliated in any way with the aforementioned paper, researchers, or organizations. My only contribution is to curate the SPIDER data set \nhere on Hugging Face to increase accessibility. While I have taken care to curate the data in a way that maintains the integrity of the original data, any findings using this\nparticular data set should be validated against the original data provided by the researchers on Zenodo.*)", "## Table of Contents\n\n(Placeholder--to be filled in at end of project)", "## Dataset Description\n\n- Paper: Lumbar spine segmentation in MR images: a dataset and a public benchmark\n- Repository: Zenodo", "### Dataset Summary\n\nThe dataset includes 447 sagittal T1 and T2 MRI series collected from 218 patients across four hospitals.\nSegmentation masks indicating the vertebrae, intervertebral discs (IVDs), and spinal canal are also included.\nSegmentation masks were created manually by a medical trainee under the supervision of\na medical imaging expert and an experienced musculoskeletal radiologist.\n\nIn addition to MR images and segmentation masks, additional metadata (e.g., scanner manufacturer, pixel bandwidth, etc.), limited\npatient characteristics (biological sex and age, when available), and radiological gradings indicating specific degenerative\nchanges can be loaded with the corresponding image data.", "## Dataset Structure", "### Data Instances\n\nThere are 447 images and corresponding segmentation masks for 218 unique patients.", "### Data Fields\n\nThe following list includes the data fields available for importing:\n\n- Numeric representation of image\n \n- Numeric representation of segmentation mask\n - vertebrae\n - intervertebral discs\n - spinal canal\n \n- Image characteristics\n - number of vertebrae\n - number of discs\n \n- Patient characteristics\n - biological sex\n - age\n \n- Scanner characteristics\n - manufacturer\n - manufacturer model name\n - serial number\n - software version\n - echo numbers\n - echo time\n - echo train length\n - flip angle\n - imaged nucleus\n - imaging frequency\n - inplane phase encoding direction\n - MR acquisition type\n - magnetic field strength\n - number of phase encoding steps\n - percent phase field of view\n - percent sampling\n - photometric interpretation\n - pixel bandwidth\n - pixel spacing\n - repetition time\n - specific absorption rate (SAR)\n - samples per pixel\n - scanning sequence\n - sequence name\n - series description\n - slice thickness\n - spacing between slices\n - specific character set\n - transmit coil name\n - window center\n - window width\n\n(TODO: Will add variable descriptions after proposal approval)", "### Data Splits\n\nThe training set contains [x] images distributed as follows:\n\n- Unique individuals: [x]\n\n- Standard 
sagittal T1 images: [x]\n- Standard sagittal T2 images: [y]\n- Standard sagittal T2 SPACE images: [z]\n- \nThe validation set contains 87 images distributed as follows:\n\n- Unique individuals: [x]\n\n- Standard sagittal T1 images: [x]\n- Standard sagittal T2 images: [y]\n- Standard sagittal T2 SPACE images: [z]\n\nAn additional hidden test set (not available through Hugging Face) is available on the SPIDER Grand Challenge.", "## Image Resolution\n\nStandard sagittal T1 and T2 image resolution ranges from 3.3 x 0.33 x 0.33 mm to 4.8 x 0.90 x 0.90 mm. \nSagittal T2 SPACE sequence images had a near isotropic spatial resolution with a voxel size of 0.90 x 0.47 x 0.47 mm.\nSource", "## Dataset Curation\n\nThe data have been curated to enable users to load any of the following:\n\n- Raw image files\n- Raw segmentation masks\n- Numeric representations of images in tensor format\n- Numeric representations of segmentation masks in tensor format\n- Linked patient characteristics (limited to sex and age, if available)\n- Linked scanner characteristics", "### Source Data", "### Processing Steps\n\n(Specifics to be determined, but will include:)\n\n1. Conversion of .mha files to numeric representations\n2. Linking of segmentation mask numeric representations to image files\n3. Linking of patient and scanner characteristics to image files\n4. Cleaning of patient and scanner characteristics", "## Additional Information", "### License\n\nThe dataset is published under a CC-BY 4.0 license: URL\n\nJasper W. van der Graaf, Miranda L. van Hooff, Constantinus F. M. Buckens, Matthieu Rutten, \nJob L. C. van Susante, Robert Jan Kroeze, Marinus de Kleuver, Bram van Ginneken, Nikolas Lessmann. (2023).\n*Lumbar spine segmentation in MR images: a dataset and a public benchmark.* URL" ]
e3dc329ea7bd8626c6cb999f757f51f2bfec0d4f
# Dataset Card for "medqa_train_instruction_format" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jtatman/medqa_train_instruction_format
[ "region:us" ]
2024-01-30T01:42:46+00:00
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "instruction", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 32228274, "num_examples": 10178}], "download_size": 0, "dataset_size": 32228274}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-30T01:47:02+00:00
[]
[]
TAGS #region-us
# Dataset Card for "medqa_train_instruction_format" More Information needed
[ "# Dataset Card for \"medqa_train_instruction_format\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"medqa_train_instruction_format\"\n\nMore Information needed" ]
7b0d0411941cf99f61ce0572a8b88732464b0907
# Abstract This is a multi-turn conversation dataset generated from the Japanese Wikipedia dataset using Orion14B-Chat. Commercial use is possible, but the license is complicated, so please read it carefully before using it. Generation took about half a week on 200 machines, each with 4x V100 GPUs. # License 【Orion-14B Series】 Models Community License Agreement https://huggingface.co/OrionStarAI/Orion-14B-Chat/blob/main/ModelsCommunityLicenseAgreement # Computing ABCI https://abci.ai/ja/
shi3z/ja_conv_wikipedia_orion14B_100K
[ "task_categories:conversational", "size_categories:100K<n<1M", "language:ja", "region:us" ]
2024-01-30T01:45:05+00:00
{"language": ["ja"], "size_categories": ["100K<n<1M"], "task_categories": ["conversational"]}
2024-01-30T02:13:12+00:00
[]
[ "ja" ]
TAGS #task_categories-conversational #size_categories-100K<n<1M #language-Japanese #region-us
# Abstract This is a multi-turn conversation dataset generated from the Japanese Wikipedia dataset using Orion14B-Chat. Commercial use is possible, but the license is complicated, so please read it carefully before using it. Generation took about half a week on 200 machines, each with 4x V100 GPUs. # License 【Orion-14B Series】 Models Community License Agreement URL # Computing ABCI URL
[ "# Abstruct\nThis is a multi-turn conversation dataset generated from the Japanese Wikipedia dataset using Orion14B-Chat. Commercial use is possible, but the license is complicated, so please read it carefully before using it.\nI generated V100x4 on 200 machines in about half a week.", "# License\n【Orion-14B Series】 Models Community License Agreement\nURL", "# Computing\nABCI\nURL" ]
[ "TAGS\n#task_categories-conversational #size_categories-100K<n<1M #language-Japanese #region-us \n", "# Abstruct\nThis is a multi-turn conversation dataset generated from the Japanese Wikipedia dataset using Orion14B-Chat. Commercial use is possible, but the license is complicated, so please read it carefully before using it.\nI generated V100x4 on 200 machines in about half a week.", "# License\n【Orion-14B Series】 Models Community License Agreement\nURL", "# Computing\nABCI\nURL" ]
71ac0d0bd1f951d2d6b70311f7d2ae404e1ffa82
# Dataset Card for Dataset Name A Dataset for Evaluating Retrieval-Augmented Generation Across Documents ### Dataset Description **MultiHop-RAG**: a QA dataset to evaluate retrieval and reasoning across documents with metadata in the RAG pipelines. It contains 2556 queries, with evidence for each query distributed across 2 to 4 documents. The queries also involve document metadata, reflecting complex scenarios commonly found in real-world RAG applications. ### Dataset Sources <!-- Provide the basic links for the dataset. --> - **Github:** [MultiHop-RAG](https://github.com/yixuantt/MultiHop-RAG) - **Paper:** [MultiHop-RAG: Benchmarking Retrieval-Augmented Generation for Multi-Hop Queries](https://arxiv.org/abs/2401.15391) ## Citation **BibTeX:** ``` @misc{tang2024multihoprag, title={MultiHop-RAG: Benchmarking Retrieval-Augmented Generation for Multi-Hop Queries}, author={Yixuan Tang and Yi Yang}, year={2024}, eprint={2401.15391}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
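A minimal loading sketch; the two configuration names, "MultiHopRAG" for the queries and "corpus" for the source documents, come from this repository's configuration, and split names are printed rather than assumed.

```python
from datasets import load_dataset

# Queries with multi-hop evidence and the supporting document corpus are separate configs.
queries = load_dataset("yixuantt/MultiHopRAG", "MultiHopRAG")
corpus = load_dataset("yixuantt/MultiHopRAG", "corpus")
print(queries)  # inspect the available splits and fields
print(corpus)
```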
yixuantt/MultiHopRAG
[ "task_categories:question-answering", "task_categories:feature-extraction", "size_categories:1K<n<10K", "language:en", "license:odc-by", "arxiv:2401.15391", "region:us" ]
2024-01-30T02:28:02+00:00
{"language": ["en"], "license": "odc-by", "size_categories": ["1K<n<10K"], "task_categories": ["question-answering", "feature-extraction"], "pretty_name": "MultiHop-RAG", "configs": [{"config_name": "MultiHopRAG", "data_files": "MultiHopRAG.json"}, {"config_name": "corpus", "data_files": "corpus.json"}]}
2024-01-30T02:49:29+00:00
[ "2401.15391" ]
[ "en" ]
TAGS #task_categories-question-answering #task_categories-feature-extraction #size_categories-1K<n<10K #language-English #license-odc-by #arxiv-2401.15391 #region-us
# Dataset Card for Dataset Name A Dataset for Evaluating Retrieval-Augmented Generation Across Documents ### Dataset Description MultiHop-RAG: a QA dataset to evaluate retrieval and reasoning across documents with metadata in the RAG pipelines. It contains 2556 queries, with evidence for each query distributed across 2 to 4 documents. The queries also involve document metadata, reflecting complex scenarios commonly found in real-world RAG applications. ### Dataset Sources - Github: MultiHop-RAG - Paper: MultiHop-RAG: Benchmarking Retrieval-Augmented Generation for Multi-Hop Queries BibTeX:
[ "# Dataset Card for Dataset Name\n\nA Dataset for Evaluating Retrieval-Augmented Generation Across Documents", "### Dataset Description\n\nMultiHop-RAG: a QA dataset to evaluate retrieval and reasoning across documents with metadata in the RAG pipelines. It contains 2556 queries, with evidence for each query distributed across 2 to 4 documents. The queries also involve document metadata, reflecting complex scenarios commonly found in real-world RAG applications.", "### Dataset Sources \n\n\n\n- Github: MultiHop-RAG\n- Paper: MultiHop-RAG: Benchmarking Retrieval-Augmented Generation for Multi-Hop Queries\n\nBibTeX:" ]
[ "TAGS\n#task_categories-question-answering #task_categories-feature-extraction #size_categories-1K<n<10K #language-English #license-odc-by #arxiv-2401.15391 #region-us \n", "# Dataset Card for Dataset Name\n\nA Dataset for Evaluating Retrieval-Augmented Generation Across Documents", "### Dataset Description\n\nMultiHop-RAG: a QA dataset to evaluate retrieval and reasoning across documents with metadata in the RAG pipelines. It contains 2556 queries, with evidence for each query distributed across 2 to 4 documents. The queries also involve document metadata, reflecting complex scenarios commonly found in real-world RAG applications.", "### Dataset Sources \n\n\n\n- Github: MultiHop-RAG\n- Paper: MultiHop-RAG: Benchmarking Retrieval-Augmented Generation for Multi-Hop Queries\n\nBibTeX:" ]
22d16aa46d546106db40d6511be1819b7fcabc9e
# Dataset Card for "coref_100" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
joey234/coref_100
[ "region:us" ]
2024-01-30T03:37:56+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "pronoun", "dtype": "string"}, {"name": "candidates", "sequence": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 12857, "num_examples": 100}], "download_size": 11053, "dataset_size": 12857}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-30T03:42:03+00:00
[]
[]
TAGS #region-us
# Dataset Card for "coref_100" More Information needed
[ "# Dataset Card for \"coref_100\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"coref_100\"\n\nMore Information needed" ]
da184254e65a95e5edac36436ecd33a202c62f2f
# Dataset Card for "JigsawTest" ## Source - [TARA](https://github.com/zeyofu/TARA) ## Task - Determine which one of image2 (A) and image3 (B) can complete image1 (the lower right corner of image1 is missing). ## Prompt Concise version: ``` Disclaimer: This is not to make unfair assumptions about the people in the image and you just need to give your assessment on this question. You don't need to identify the real people. You just need to analyze based on the information I gave you.\n\nGiven image1 with the lower right corner missing, can you tell which one of image2 and image3 is the missing part? \nSelect from the following choices and tell me your answer, (A) or (B)?\n(A) image2\n(B) image3 ``` Verbose version: ``` Disclaimer: This is not to make unfair assumptions about the people in the image and you just need to give your assessment on this question. You don't need to identify the real people. You just need to analyze based on the information I gave you.\n\nGiven image1 with the lower right corner missing, can you tell which one of image2 and image3 is the missing part? Imagine which image would be more appropriate to place in the missing spot. You can also carefully observe and compare the edges of the images.\nSelect from the following choices and tell me your answer, (A) or (B)?\n(A) image2\n(B) image3 ```
PerceptionEval/JigsawTest
[ "region:us" ]
2024-01-30T03:50:19+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "val", "path": "data/val-*"}]}], "dataset_info": {"features": [{"name": "idx", "dtype": "int32"}, {"name": "image1", "dtype": "image"}, {"name": "image2", "dtype": "image"}, {"name": "image3", "dtype": "image"}, {"name": "choices", "sequence": "string"}], "splits": [{"name": "val", "num_bytes": 70660171.0, "num_examples": 360}], "download_size": 70667478, "dataset_size": 70660171.0}}
2024-02-15T17:27:25+00:00
[]
[]
TAGS #region-us
# Dataset Card for "JigsawTest" ## Source - TARA ## Task - Determine which one of image2 (A) and image3 (B) can complete image1 (the lower right corner of image1 is missing). ## Prompt Concise version: Verbose version:
[ "# Dataset Card for \"JigsawTest\"", "## Source\n - TARA", "## Task\n - Determine which one of image2 (A) and image3 (B) can complete image1 (the lower right corner of image1 is missing).", "## Prompt\nConcise version:\n\n\nVerbose version:" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"JigsawTest\"", "## Source\n - TARA", "## Task\n - Determine which one of image2 (A) and image3 (B) can complete image1 (the lower right corner of image1 is missing).", "## Prompt\nConcise version:\n\n\nVerbose version:" ]
0f930479d81a3c049d96e4ea83ddddfed8d67431
# Dataset Card for "Jigsaw" ## Source - [TARA](https://github.com/zeyofu/TARA) ## Task - Determine which one of image2 (A) and image3 (B) can complete image1 (the lower right corner of image1 is missing).
PerceptionEval/Jigsaw
[ "region:us" ]
2024-01-30T03:50:23+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "val", "path": "data/val-*"}]}], "dataset_info": {"features": [{"name": "idx", "dtype": "int32"}, {"name": "image1", "dtype": "image"}, {"name": "image2", "dtype": "image"}, {"name": "image3", "dtype": "image"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "val", "num_bytes": 70661971.0, "num_examples": 360}], "download_size": 70668549, "dataset_size": 70661971.0}}
2024-02-13T14:18:01+00:00
[]
[]
TAGS #region-us
# Dataset Card for "Jigsaw" ## Source - TARA ## Task - Determine which one of image2 (A) and image3 (B) can complete image1 (the lower right corner of image1 is missing).
[ "# Dataset Card for \"Jigsaw\"", "## Source\n - TARA", "## Task\n - Determine which one of image2 (A) and image3 (B) can complete image1 (the lower right corner of image1 is missing)." ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"Jigsaw\"", "## Source\n - TARA", "## Task\n - Determine which one of image2 (A) and image3 (B) can complete image1 (the lower right corner of image1 is missing)." ]
e9179fe13464b8e582cfaa390ba228e2e2fefedd
# Dataset Card for "counterfactual-babylm-pipps_and_keys_to_it_all_removal" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
kanishka/counterfactual-babylm-pipps_and_keys_to_it_all_removal
[ "region:us" ]
2024-01-30T04:04:57+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 581838721, "num_examples": 11634224}, {"name": "validation", "num_bytes": 56120230, "num_examples": 1026747}], "download_size": 421689270, "dataset_size": 637958951}}
2024-01-30T04:05:16+00:00
[]
[]
TAGS #region-us
# Dataset Card for "counterfactual-babylm-pipps_and_keys_to_it_all_removal" More Information needed
[ "# Dataset Card for \"counterfactual-babylm-pipps_and_keys_to_it_all_removal\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"counterfactual-babylm-pipps_and_keys_to_it_all_removal\"\n\nMore Information needed" ]
af8f045e12b023e172a333216940e7e7e99b9171
# Dataset Card for "hf-codegen-v2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
treezy254/hf-codegen-v2
[ "region:us" ]
2024-01-30T05:00:47+00:00
{"dataset_info": {"features": [{"name": "repo_id", "dtype": "string"}, {"name": "file_path", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 20203861, "num_examples": 1776}], "download_size": 5922299, "dataset_size": 20203861}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-30T05:00:49+00:00
[]
[]
TAGS #region-us
# Dataset Card for "hf-codegen-v2" More Information needed
[ "# Dataset Card for \"hf-codegen-v2\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"hf-codegen-v2\"\n\nMore Information needed" ]
0ece4e13181ea4073c2c6c5657d240342337bd8d
# Dataset Card for "hug_stack" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
treezy254/hug_stack
[ "region:us" ]
2024-01-30T05:07:26+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "metadata", "struct": [{"name": "file_path", "dtype": "string"}, {"name": "repo_id", "dtype": "string"}, {"name": "token_count", "dtype": "int64"}]}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 13359739, "num_examples": 1388}], "download_size": 3409561, "dataset_size": 13359739}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-30T05:07:29+00:00
[]
[]
TAGS #region-us
# Dataset Card for "hug_stack" More Information needed
[ "# Dataset Card for \"hug_stack\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"hug_stack\"\n\nMore Information needed" ]
719b2424a22865e60454858668b7ebe18d4ac2ed
# Sebut Perkataan 1. `sebut-perkataan-man` voice by [Husein Zolkepli](https://www.linkedin.com/in/husein-zolkepli/) 2. `tolong-sebut` voice by [Khalil Nooh](https://www.linkedin.com/in/khalilnooh/) 3. `sebut-perkataan-woman` voice by [Mas Aisyah Ahmad](https://www.linkedin.com/in/mas-aisyah-ahmad-b46508a9/) 4. Recorded using low-end tech microphones.
mesolitica/sebut-perkataan
[ "task_categories:automatic-speech-recognition", "language:ms", "license:mit", "region:us" ]
2024-01-30T05:32:32+00:00
{"language": ["ms"], "license": "mit", "task_categories": ["automatic-speech-recognition"]}
2024-01-30T05:33:57+00:00
[]
[ "ms" ]
TAGS #task_categories-automatic-speech-recognition #language-Malay (macrolanguage) #license-mit #region-us
# Sebut Perkataan 1. 'sebut-perkataan-man' voice by Husein Zolkepli 2. 'tolong-sebut' voice by Khalil Nooh 3. 'sebut-perkataan-woman' voice by Mas Aisyah Ahmad 4. Recorded using low-end tech microphones.
[ "# Sebut Perkataan\n\n1. 'sebut-perkataan-man' voice by Husein Zolkepli\n2. 'tolong-sebut' voice by Khalil Nooh\n3. 'sebut-perkataan-woman' voice by Mas Aisyah Ahmad\n4. Recorded using low-end tech microphones." ]
[ "TAGS\n#task_categories-automatic-speech-recognition #language-Malay (macrolanguage) #license-mit #region-us \n", "# Sebut Perkataan\n\n1. 'sebut-perkataan-man' voice by Husein Zolkepli\n2. 'tolong-sebut' voice by Khalil Nooh\n3. 'sebut-perkataan-woman' voice by Mas Aisyah Ahmad\n4. Recorded using low-end tech microphones." ]
5828455ca392653a32e957aaa2791dca470bd5fc
A part of the `Open-Orca/SlimOrca` dataset translated into Russian. To order a translation of your dataset into any language of the world: https://t.me/PyWebSol
PyWebSol/ru-slimorca-300k
[ "task_categories:text-generation", "task_categories:question-answering", "task_categories:conversational", "language:ru", "license:apache-2.0", "region:us" ]
2024-01-30T05:34:49+00:00
{"language": ["ru"], "license": "apache-2.0", "task_categories": ["text-generation", "question-answering", "conversational"], "dataset_info": {"features": [{"name": "role", "sequence": "string"}, {"name": "content", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 1011056619, "num_examples": 300013}], "download_size": 452519640, "dataset_size": 1011056619}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-30T08:14:26+00:00
[]
[ "ru" ]
TAGS #task_categories-text-generation #task_categories-question-answering #task_categories-conversational #language-Russian #license-apache-2.0 #region-us
A part of the 'Open-Orca/SlimOrca' dataset translated into Russian. To order a translation of your dataset into any language of the world: https://t.me/PyWebSol
[]
[ "TAGS\n#task_categories-text-generation #task_categories-question-answering #task_categories-conversational #language-Russian #license-apache-2.0 #region-us \n" ]
cdec2580efc7c6fbeedcbf2cbe3b567f6e19d429
# Dataset Card for "voxceleb1_unit" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Codec-SUPERB/voxceleb1_unit
[ "region:us" ]
2024-01-30T05:48:45+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "academicodec_hifi_16k_320d", "path": "data/academicodec_hifi_16k_320d-*"}, {"split": "academicodec_hifi_16k_320d_large_uni", "path": "data/academicodec_hifi_16k_320d_large_uni-*"}, {"split": "academicodec_hifi_24k_320d", "path": "data/academicodec_hifi_24k_320d-*"}, {"split": "audiodec_24k_320d", "path": "data/audiodec_24k_320d-*"}, {"split": "dac_16k", "path": "data/dac_16k-*"}, {"split": "dac_24k", "path": "data/dac_24k-*"}, {"split": "dac_44k", "path": "data/dac_44k-*"}, {"split": "encodec_24k_12bps", "path": "data/encodec_24k_12bps-*"}, {"split": "encodec_24k_1_5bps", "path": "data/encodec_24k_1_5bps-*"}, {"split": "encodec_24k_24bps", "path": "data/encodec_24k_24bps-*"}, {"split": "encodec_24k_3bps", "path": "data/encodec_24k_3bps-*"}, {"split": "encodec_24k_6bps", "path": "data/encodec_24k_6bps-*"}, {"split": "funcodec_en_libritts_16k_gr1nq32ds320", "path": "data/funcodec_en_libritts_16k_gr1nq32ds320-*"}, {"split": "funcodec_en_libritts_16k_gr8nq32ds320", "path": "data/funcodec_en_libritts_16k_gr8nq32ds320-*"}, {"split": "funcodec_en_libritts_16k_nq32ds320", "path": "data/funcodec_en_libritts_16k_nq32ds320-*"}, {"split": "funcodec_en_libritts_16k_nq32ds640", "path": "data/funcodec_en_libritts_16k_nq32ds640-*"}, {"split": "funcodec_zh_en_16k_nq32ds320", "path": "data/funcodec_zh_en_16k_nq32ds320-*"}, {"split": "funcodec_zh_en_16k_nq32ds640", "path": "data/funcodec_zh_en_16k_nq32ds640-*"}, {"split": "speech_tokenizer_16k", "path": "data/speech_tokenizer_16k-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "unit", "sequence": {"sequence": "int64"}}], "splits": [{"name": "academicodec_hifi_16k_320d", "num_bytes": 64794258, "num_examples": 4874}, {"name": "academicodec_hifi_16k_320d_large_uni", "num_bytes": 64794258, "num_examples": 4874}, {"name": "academicodec_hifi_24k_320d", "num_bytes": 97062226, "num_examples": 4874}, {"name": "audiodec_24k_320d", "num_bytes": 207036914, "num_examples": 4874}, {"name": "dac_16k", "num_bytes": 250043666, "num_examples": 4874}, {"name": "dac_24k", "num_bytes": 990592882, "num_examples": 4874}, {"name": "dac_44k", "num_bytes": 319279370, "num_examples": 4874}, {"name": "encodec_24k_12bps", "num_bytes": 388331762, "num_examples": 4874}, {"name": "encodec_24k_1_5bps", "num_bytes": 48699266, "num_examples": 4874}, {"name": "encodec_24k_24bps", "num_bytes": 776483186, "num_examples": 4874}, {"name": "encodec_24k_3bps", "num_bytes": 97218194, "num_examples": 4874}, {"name": "encodec_24k_6bps", "num_bytes": 194256050, "num_examples": 4874}, {"name": "funcodec_en_libritts_16k_gr1nq32ds320", "num_bytes": 518339442, "num_examples": 4874}, {"name": "funcodec_en_libritts_16k_gr8nq32ds320", "num_bytes": 518339442, "num_examples": 4874}, {"name": "funcodec_en_libritts_16k_nq32ds320", "num_bytes": 518339442, "num_examples": 4874}, {"name": "funcodec_en_libritts_16k_nq32ds640", "num_bytes": 260195698, "num_examples": 4874}, {"name": "funcodec_zh_en_16k_nq32ds320", "num_bytes": 518339442, "num_examples": 4874}, {"name": "funcodec_zh_en_16k_nq32ds640", "num_bytes": 260195698, "num_examples": 4874}, {"name": "speech_tokenizer_16k", "num_bytes": 129720114, "num_examples": 4874}], "download_size": 941977312, "dataset_size": 6222061310}}
2024-01-30T05:51:33+00:00
[]
[]
TAGS #region-us
# Dataset Card for "voxceleb1_unit" More Information needed
[ "# Dataset Card for \"voxceleb1_unit\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"voxceleb1_unit\"\n\nMore Information needed" ]
faccefcc79826bafcdeaa66e1523ec59eeed11f8
# Dataset Card for "dpo-mix-7k" A cocktail with our latest DPO datasets: orca pairs, ultrafeedback ratings, and capybara-dpo
argilla/dpo-mix-7k
[ "distilabel", "synthetic", "region:us" ]
2024-01-30T05:49:25+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "source", "dtype": "string"}, {"name": "conversation", "list": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}]}, {"name": "original_response", "dtype": "string"}, {"name": "generation_prompt", "sequence": "string"}, {"name": "raw_generation_responses", "sequence": "string"}, {"name": "new_generations", "sequence": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "chosen", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "rejected", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "rating_chosen", "dtype": "int64"}, {"name": "rating_rejected", "dtype": "int64"}, {"name": "chosen_model", "dtype": "string"}, {"name": "rejected_model", "dtype": "string"}, {"name": "turns", "dtype": "int64"}, {"name": "dataset", "dtype": "string"}, {"name": "chosen-rating", "dtype": "float64"}, {"name": "chosen-model", "dtype": "string"}, {"name": "rejected-rating", "dtype": "float64"}, {"name": "rejected-model", "dtype": "string"}, {"name": "system", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "generations", "sequence": "string"}, {"name": "order", "sequence": "string"}, {"name": "labelling_model", "dtype": "string"}, {"name": "labelling_prompt", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "raw_labelling_response", "dtype": "string"}, {"name": "rating", "sequence": "float64"}, {"name": "rationale", "dtype": "string"}, {"name": "status", "dtype": "string"}, {"name": "original_chosen", "dtype": "string"}, {"name": "original_rejected", "dtype": "string"}, {"name": "chosen_score", "dtype": "float64"}, {"name": "in_gsm8k_train", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 150514131.6, "num_examples": 6750}, {"name": "test", "num_bytes": 16723792.4, "num_examples": 750}], "download_size": 78242944, "dataset_size": 167237924}, "tags": ["distilabel", "synthetic"]}
2024-02-05T15:29:54+00:00
[]
[]
TAGS #distilabel #synthetic #region-us
# Dataset Card for "dpo-mix-7k" A cocktail with our latest DPO datasets: orca pairs, ultrafeedback ratings, and capybara-dpo
[ "# Dataset Card for \"dpo-mix-7k\"\n\nA cocktail with our latest DPO datasets: orca pairs, ultrafeedback ratings, and capybara-dpo" ]
[ "TAGS\n#distilabel #synthetic #region-us \n", "# Dataset Card for \"dpo-mix-7k\"\n\nA cocktail with our latest DPO datasets: orca pairs, ultrafeedback ratings, and capybara-dpo" ]
96ff8459e4f6be441d79ad488a3ebbdc63722d23
This dataset is a collection of textbooks used for UPSC Prelims preparation.
imvbhuvan/aspireai
[ "license:mit", "region:us" ]
2024-01-30T05:57:38+00:00
{"license": "mit"}
2024-01-30T16:49:11+00:00
[]
[]
TAGS #license-mit #region-us
This dataset is a collection of textbooks used for UPSC Prelims preparation.
[]
[ "TAGS\n#license-mit #region-us \n" ]
454ab4714eae8ce34342a03b2faf5e146b50be22
Synthetic separation data in musdb format for MLP episodes, constructed from SFX, music, and isolated dialogue lines using scripts here: https://github.com/effusiveperiscope/PPPDataset/blob/main/sfx.py --- language: - en ---
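A minimal reading sketch with the `musdb` package, assuming the archive follows the standard musdb directory layout and was exported as .wav stems; the stem names in this synthetic set (e.g., dialogue vs. music vs. SFX) may differ from the usual vocals/drums/bass/other, so they are listed rather than assumed.

```python
import musdb

# Sketch only: the root path is a placeholder and is_wav assumes decoded .wav stems.
mus = musdb.DB(root="path/to/MLPSepSynthdata", is_wav=True, subsets="train")
for track in mus:
    mixture = track.audio  # (num_samples, channels) float array
    stems = {name: target.audio for name, target in track.targets.items()}
    print(track.name, mixture.shape, list(stems))  # see which stem names this set defines
    break
```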
therealvul/MLPSepSynthdata
[ "region:us" ]
2024-01-30T06:13:49+00:00
{}
2024-02-02T20:06:26+00:00
[]
[]
TAGS #region-us
Synthetic separation data in musdb format for MLP episodes, constructed from SFX, music, and isolated dialogue lines using scripts here: URL --- language: - en ---
[]
[ "TAGS\n#region-us \n" ]
dac12a4af65322040e2e6c00c346ffc49cc8eccf
--- # Dataset Card for Dataset Name <!-- Provide a quick summary of the dataset. --> This dataset aims to be a tool to help trace linguistic patterns in the reddit posts from members who partake in the internet centric pill ideologies, known as blackpill, red pill, blue pill. ## Dataset Details ### Dataset Description A few of the major groups' posts have been coalesced into one dataset, all from different years. There are more than 200 posts per major pill group on reddit (red pill rebooted, blue pill, black pill, married red pill, red pill women, and feminism as a counterpoint of reference). The group of feminism was added as a juxtaposition against red pill women, in order to allow researchers to explore those dichotomies. For researchers, the value will be in identifying or classifying the types of words that make one ideology more prominent than the other. - **Curated by:** [steamcyclone] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [steamcyclone] - **Language(s) (NLP):** [EN] - **License:** [CC] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [This is the only source] ## Uses The main usage of this dataset is to study linguistic patterns. Running models and detecting word usage per group, as well as overlaps across groups, is an ideal use for this dataset. With the rise of the loneliness epidemic, any insights that come from this are welcome. ### Direct Use The suitable use cases are multi-class classification, word clustering or semantic clustering per different groups, summarization modeling, text parsing, and any other natural language processing task. [More Information Needed] ### Out-of-Scope Use This dataset is not meant to be utilized to demonize or mock certain online communities for the trials in life in which individuals find themselves. If the viewer's agenda is to push forward some misandrist or misogynistic agenda, please ignore this dataset. [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> Currently, this dataset contains - subreddit of the post : string, - postid : string - title of the post: string - text of the post (where applicable) : string - url (if something was embedded) : string - score : int32 - author : string - date : int64 [More Information Needed] ## Dataset Creation ### Curation Rationale With the rise of the loneliness epidemic and the radicalization of internet content pitting men and women against each other, it is important to seek understanding of the root of the problem. Depending on whom you ask, you'll get a plethora of answers. Jordan Peterson describes it as some type of post-modernist feminist liberalism problem. The Andrew Tates and other conservative archetypes blame the loss of traditionalism. Others blame dating apps and their selection bias effects. Within each of the major pill ideologies, with the exception of the BlackPill, men blame women, and women blame men. Unfortunately, male spaces, as substantiated by research and media coverage, in recent years have only been able to exist on the internet, and counter-spaces have emerged to challenge the views held in the differing ideologies. 
In short, according to archetypical definitions - the red pill is the emancipation of the masculinity in a feminized age and understanding mating strategies with women. - the blue pill is the satire of the red pill, often run by women. - the black pill is meant to bridge the gaps across the red, pink, and blue pills in order to land on a ground truth. - the pink pill is about improving the female image by augmenting sexual marketplace value. [More Information Needed] ### Source Data Each record contains a reddit post, approximately 200 per group, and has a key title and a post with words to display the intended message by the author. #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> In progress. However, the plan is to increase the number of records and leverage the ChatGpt API to summarize the messages into categories. In addition, the dates have to be cleaned a little, in order to make them more useful for researchers. I am also not sure if I can retrieve comments per post, further augmenting the data. [More Information Needed] #### Who are the source data producers? The producers of the data are the various redditors who have participated in these spaces. [More Information Needed] ### Annotations [optional] An annotation that is not part of the collection will be the ChatGPT summarizations (future). The subreddit labels are merely the origins of the posts. #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> The origins of the posts are the labels of the records. #### Who are the annotators? I and the subreddit origin are the label annotators. #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> This dataset contains no personally identifiable information with the exception of embedded youtube links. Those links may lead to videos where the impact of the content is unknown. ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> A major caveat is that the pink pill and original red pill groups are shadow banned, impeding their scraping process. This is a flaw I recognize because the original red pill movement, which started in books by authors, propagated itself through its internet (reddit) variant, and it spawned all the other pills. Another bias point is that there is more red pill content, as a means to compensate for the ban of the original red pill subreddit. As such, I caution researchers to balance their datasets where necessary. [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. Remember that this dataset is not a tool for reckless and hateful political agendas. 
## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] Pill ideologies : In short, according to archetypical definitions - the red pill is the emancipation of the masculinity in a feminized age and understanding mating strategies with women. - the blue pill is the satire of the red pill, often run by women. - the black pill is meant to bridge the gaps across the red, pink, and blue pills in order to land on a ground truth. - the pink pill is about improving the female image by augmenting sexual marketplace value. ## Dataset Card Authors [optional] steamcyclone, all the redditors from the subreddits in the authors columns. ## Dataset Card Contact - N/A
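A minimal loading sketch for the fields listed under Dataset Structure (subreddit, post_id, title, text, url, score, author, date); the split name `train` is an assumption and should be adjusted to whatever the repository actually exposes.

```python
from collections import Counter
from datasets import load_dataset

posts = load_dataset("steamcyclone/Pill_Ideologies-Post_Titles", split="train")
row = posts[0]
print(row["subreddit"], row["title"][:80], row["score"])

# Rough balance check across the pill subreddits and the feminism counterpoint,
# as suggested in the Bias, Risks, and Limitations section.
print(Counter(posts["subreddit"]))
```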
steamcyclone/Pill_Ideologies-Post_Titles
[ "task_categories:text-classification", "task_ids:multi-class-classification", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "size_categories:n<2K", "source_datasets:reddit", "language:en", "license:cc", "natural-language-understanding", "ideology classification", "text classification", "region:us" ]
2024-01-30T06:21:49+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": "cc", "size_categories": ["n<2K"], "source_datasets": ["reddit"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification"], "pretty_name": "PiLls", "tags": ["natural-language-understanding", "ideology classification", "text classification"], "dataset_info": {"features": [{"name": "subreddit", "dtype": "string"}, {"name": "post_id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "score", "dtype": "int32"}, {"name": "text", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "author", "dtype": "string"}, {"name": "date", "dtype": "int64"}]}}
2024-01-30T20:44:19+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-crowdsourced #language_creators-crowdsourced #size_categories-n<2K #source_datasets-reddit #language-English #license-cc #natural-language-understanding #ideology classification #text classification #region-us
--- # Dataset Card for Dataset Name This dataset aims to be a tool to help trace linguistic patterns in the reddit posts from members who partake in the internet centric pill ideologies, known as blackpill, red pill, blue pill. ## Dataset Details ### Dataset Description A few of the major groups' posts have been coalesced into one dataset, all from different years. There are more than 200 posts per the major pill groups on reddit (red pill rebooted, blue pill, black pill, married red pill, red pill women, and feminism as a counterpoint of reference). The group of feminism was added as a juxtaposition against red pill women, in oder to allow researchers to explore those dichotomies. For researchers, the value will be in identifying or classifying the types of words that make one ideology more prominent than the other. - Curated by: [steamcyclone] - Funded by [optional]: - Shared by [optional]: [steamcyclone] - Language(s) (NLP): [EN] - License: [CC] ### Dataset Sources [optional] - Repository: [This is the only source] ## Uses The main usage of this dataset is to study linguistic patterns. Running models and detecting word usage per groups, as well as overlaps across groups is an ideal use for this dataset. With the rise of the loneliness epidemic, any insights that come from this are welcome. ### Direct Use The suitable use cases are to multi-class classification, word clustering or semantic clustering per different groups, summarization modeling, text parsing, and any other natural language processing task. ### Out-of-Scope Use This dataset is not meant to be utilized to demonize or mock certain online communities for the trials in life in which individuals find themselves. If the viewer's agenda is to push forward some misandrist or misogynistic agenda, please ignore this dataset. ## Dataset Structure Currently, this dataset contains - subreddit of the post : string, - postid : string - title of the post: string - text of the post (where applicable) : string - url (if something was embedded) : string\ - score : int32 - author : string - date : int64 ## Dataset Creation ### Curation Rationale With the rise of the loneliness epidemic and the radicalization of internet content pitting men and women against each other, it is important to seek understanding of the root of the problem. Depending on whom you ask, you'll get a plethora of answers. Jordan Peterson describes it as some type of post-modernist feminist liberalism problem. The Andrew Tates and other conservative archetypes blame the loss of traditionalism. Others blame dating apps and its selection bias effects. Within each of the major pill ideologies, with the exception of the BlackPill, men blame women, and women blame men. Unfortunately, male spaces, as substantiated by research and media coverage, in recent years have only been able to exist on the internet, and counter-spaces have emerged to challenge the views held in the differing ideologies. In short, according to archetypical definitions - the red pill is the emancipation of the masculinity in a feminized age and understanding mating strategies with women. - the blue pill is the satire of the red pill, often run by women. - the black pill is meant to bridge the gaps across the red, pink, and blue pills in order to land on a ground truth. - the pink pill is about improving the female image by augmenting sexual marketplace value. 
### Source Data Each record contains a reddit post, approximately 200 per group, and has a key title and a post with words to display the intended message by the author. #### Data Collection and Processing In progress. However, the plan is to increase the amount of records and leverage the ChatGpt API to summarize the messages into categories. In addition, the dates have to be cleaned a little, in order to add use for researches. I am also not sure if I can retrieve comments per post, further augmenting the data. #### Who are the source data producers? The producers of the data are the various redditors who have participated in these spaces. ### Annotations [optional] An annotation that is not part of the collection will be the ChatGPT summarizations (future). The subreddit labels are merely the origins of the posts. #### Annotation process The origin of the posts are the labels of the records. #### Who are the annotators? I and the subreddit origin are the label annotators. #### Personal and Sensitive Information This dataset contains no personally identifiable information with the exception of embedded youtube links. Those links may lead to videos where the impact of the content is unknown. ## Bias, Risks, and Limitations A major caveat is that the pink pill and original red pill groups are shadow banned, impeding their scraping process. This is a flaw I recognize because the original red pill movement, which started in books by authors, propagated itself through its internet (reddit) variant, and it spawned all the other pills. Another bias point is that there is more red pill content, as a means to compensate for the ban of the original red pill subreddit. As such, I caution researchers to balance their datasets where necessary. ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. Remember that this dataset is not a tool for reckless and hateful political agendas. [optional] BibTeX: APA: ## Glossary [optional] Pill ideologies : In short, according to archetypical definitions - the red pill is the emancipation of the masculinity in a feminized age and understanding mating strategies with women. - the blue pill is the satire of the red pill, often run by women. - the black pill is meant to bridge the gaps across the red, pink, and blue pills in order to land on a ground truth. - the pink pill is about improving the female image by augmenting sexual marketplace value. ## Dataset Card Authors [optional] steamcyclone, all the redditors from the subreddits in the authors columns. ## Dataset Card Contact - N/A
[ "# Dataset Card for Dataset Name\n\n\n\nThis dataset aims to be a tool to help trace linguistic patterns in the reddit posts from members who partake in the internet centric pill ideologies, known as blackpill, red pill, blue pill.", "## Dataset Details", "### Dataset Description\n\nA few of the major groups' posts have been coalesced into one dataset, all from different years. There are more than 200 posts per the major pill groups on reddit (red pill rebooted, blue pill, black pill, married red pill, red pill women, and feminism as a counterpoint of reference). The group of feminism was added as a juxtaposition against red pill women, in oder to allow researchers to explore those dichotomies. For researchers, the value will be in identifying or classifying the types of words that make one ideology more prominent than the other.\n\n- Curated by: [steamcyclone]\n- Funded by [optional]: \n- Shared by [optional]: [steamcyclone]\n- Language(s) (NLP): [EN]\n- License: [CC]", "### Dataset Sources [optional]\n\n\n\n- Repository: [This is the only source]", "## Uses\n\nThe main usage of this dataset is to study linguistic patterns. Running models and detecting word usage per groups, as well as overlaps across groups is an ideal use for this dataset. With the rise of the loneliness epidemic, any insights that come from this are welcome.", "### Direct Use\n\nThe suitable use cases are to multi-class classification, word clustering or semantic clustering per different groups, summarization modeling, text parsing, and any other natural language processing task.", "### Out-of-Scope Use\n\nThis dataset is not meant to be utilized to demonize or mock certain online communities for the trials in life in which individuals find themselves. If the viewer's agenda is to push forward some misandrist or misogynistic agenda, please ignore this dataset.", "## Dataset Structure\n\n\n\nCurrently, this dataset contains \n\n- subreddit of the post : string,\n- postid : string\n- title of the post: string\n- text of the post (where applicable) : string\n- url (if something was embedded) : string\\\n- score : int32\n- author : string\n- date : int64", "## Dataset Creation", "### Curation Rationale\n\nWith the rise of the loneliness epidemic and the radicalization of internet content pitting men and women against each other, it is important to seek understanding of the root of the problem. Depending on whom you ask, you'll get a plethora of answers. Jordan Peterson describes it as some type of post-modernist feminist liberalism problem. The Andrew Tates and other conservative archetypes blame the loss of traditionalism. Others blame dating apps and its selection bias effects. Within each of the major pill ideologies, with the exception of the BlackPill, men blame women, and women blame men. \n\nUnfortunately, male spaces, as substantiated by research and media coverage, in recent years have only been able to exist on the internet, and counter-spaces have emerged to challenge the views held in the differing ideologies.\n\nIn short, according to archetypical definitions\n- the red pill is the emancipation of the masculinity in a feminized age and understanding mating strategies with women. 
\n- the blue pill is the satire of the red pill, often run by women.\n- the black pill is meant to bridge the gaps across the red, pink, and blue pills in order to land on a ground truth.\n- the pink pill is about improving the female image by augmenting sexual marketplace value.", "### Source Data\n\nEach record contains a reddit post, approximately 200 per group, and has a key title and a post with words to display the intended message by the author.", "#### Data Collection and Processing\n\n\n\nIn progress. \n\nHowever, the plan is to increase the amount of records and leverage the ChatGpt API to summarize the messages into categories. In addition, the dates have to be cleaned a little, in order to add use for researches. I am also not sure if I can retrieve comments per post, further augmenting the data.", "#### Who are the source data producers?\n\nThe producers of the data are the various redditors who have participated in these spaces.", "### Annotations [optional]\n\nAn annotation that is not part of the collection will be the ChatGPT summarizations (future). The subreddit labels are merely the origins of the posts.", "#### Annotation process\n\n\n\nThe origin of the posts are the labels of the records.", "#### Who are the annotators?\n\nI and the subreddit origin are the label annotators.", "#### Personal and Sensitive Information\n\n\n\nThis dataset contains no personally identifiable information with the exception of embedded youtube links. Those links may lead to videos where the impact of the content is unknown.", "## Bias, Risks, and Limitations\n\n\n\nA major caveat is that the pink pill and original red pill groups are shadow banned, impeding their scraping process. This is a flaw I recognize because the original red pill movement, which started in books by authors, propagated itself through its internet (reddit) variant, and it spawned all the other pills.\n\nAnother bias point is that there is more red pill content, as a means to compensate for the ban of the original red pill subreddit. \n\nAs such, I caution researchers to balance their datasets where necessary.", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. Remember that this dataset is not a tool for reckless and hateful political agendas.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]\n\nPill ideologies :\n\nIn short, according to archetypical definitions\n- the red pill is the emancipation of the masculinity in a feminized age and understanding mating strategies with women. \n- the blue pill is the satire of the red pill, often run by women.\n- the black pill is meant to bridge the gaps across the red, pink, and blue pills in order to land on a ground truth.\n- the pink pill is about improving the female image by augmenting sexual marketplace value.", "## Dataset Card Authors [optional]\n\nsteamcyclone, all the redditors from the subreddits in the authors columns.", "## Dataset Card Contact\n\n- N/A" ]
[ "TAGS\n#task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-crowdsourced #language_creators-crowdsourced #size_categories-n<2K #source_datasets-reddit #language-English #license-cc #natural-language-understanding #ideology classification #text classification #region-us \n", "# Dataset Card for Dataset Name\n\n\n\nThis dataset aims to be a tool to help trace linguistic patterns in the reddit posts from members who partake in the internet centric pill ideologies, known as blackpill, red pill, blue pill.", "## Dataset Details", "### Dataset Description\n\nA few of the major groups' posts have been coalesced into one dataset, all from different years. There are more than 200 posts per the major pill groups on reddit (red pill rebooted, blue pill, black pill, married red pill, red pill women, and feminism as a counterpoint of reference). The group of feminism was added as a juxtaposition against red pill women, in oder to allow researchers to explore those dichotomies. For researchers, the value will be in identifying or classifying the types of words that make one ideology more prominent than the other.\n\n- Curated by: [steamcyclone]\n- Funded by [optional]: \n- Shared by [optional]: [steamcyclone]\n- Language(s) (NLP): [EN]\n- License: [CC]", "### Dataset Sources [optional]\n\n\n\n- Repository: [This is the only source]", "## Uses\n\nThe main usage of this dataset is to study linguistic patterns. Running models and detecting word usage per groups, as well as overlaps across groups is an ideal use for this dataset. With the rise of the loneliness epidemic, any insights that come from this are welcome.", "### Direct Use\n\nThe suitable use cases are to multi-class classification, word clustering or semantic clustering per different groups, summarization modeling, text parsing, and any other natural language processing task.", "### Out-of-Scope Use\n\nThis dataset is not meant to be utilized to demonize or mock certain online communities for the trials in life in which individuals find themselves. If the viewer's agenda is to push forward some misandrist or misogynistic agenda, please ignore this dataset.", "## Dataset Structure\n\n\n\nCurrently, this dataset contains \n\n- subreddit of the post : string,\n- postid : string\n- title of the post: string\n- text of the post (where applicable) : string\n- url (if something was embedded) : string\\\n- score : int32\n- author : string\n- date : int64", "## Dataset Creation", "### Curation Rationale\n\nWith the rise of the loneliness epidemic and the radicalization of internet content pitting men and women against each other, it is important to seek understanding of the root of the problem. Depending on whom you ask, you'll get a plethora of answers. Jordan Peterson describes it as some type of post-modernist feminist liberalism problem. The Andrew Tates and other conservative archetypes blame the loss of traditionalism. Others blame dating apps and its selection bias effects. Within each of the major pill ideologies, with the exception of the BlackPill, men blame women, and women blame men. \n\nUnfortunately, male spaces, as substantiated by research and media coverage, in recent years have only been able to exist on the internet, and counter-spaces have emerged to challenge the views held in the differing ideologies.\n\nIn short, according to archetypical definitions\n- the red pill is the emancipation of the masculinity in a feminized age and understanding mating strategies with women. 
\n- the blue pill is the satire of the red pill, often run by women.\n- the black pill is meant to bridge the gaps across the red, pink, and blue pills in order to land on a ground truth.\n- the pink pill is about improving the female image by augmenting sexual marketplace value.", "### Source Data\n\nEach record contains a reddit post, approximately 200 per group, and has a key title and a post with words to display the intended message by the author.", "#### Data Collection and Processing\n\n\n\nIn progress. \n\nHowever, the plan is to increase the amount of records and leverage the ChatGpt API to summarize the messages into categories. In addition, the dates have to be cleaned a little, in order to add use for researches. I am also not sure if I can retrieve comments per post, further augmenting the data.", "#### Who are the source data producers?\n\nThe producers of the data are the various redditors who have participated in these spaces.", "### Annotations [optional]\n\nAn annotation that is not part of the collection will be the ChatGPT summarizations (future). The subreddit labels are merely the origins of the posts.", "#### Annotation process\n\n\n\nThe origin of the posts are the labels of the records.", "#### Who are the annotators?\n\nI and the subreddit origin are the label annotators.", "#### Personal and Sensitive Information\n\n\n\nThis dataset contains no personally identifiable information with the exception of embedded youtube links. Those links may lead to videos where the impact of the content is unknown.", "## Bias, Risks, and Limitations\n\n\n\nA major caveat is that the pink pill and original red pill groups are shadow banned, impeding their scraping process. This is a flaw I recognize because the original red pill movement, which started in books by authors, propagated itself through its internet (reddit) variant, and it spawned all the other pills.\n\nAnother bias point is that there is more red pill content, as a means to compensate for the ban of the original red pill subreddit. \n\nAs such, I caution researchers to balance their datasets where necessary.", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. Remember that this dataset is not a tool for reckless and hateful political agendas.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]\n\nPill ideologies :\n\nIn short, according to archetypical definitions\n- the red pill is the emancipation of the masculinity in a feminized age and understanding mating strategies with women. \n- the blue pill is the satire of the red pill, often run by women.\n- the black pill is meant to bridge the gaps across the red, pink, and blue pills in order to land on a ground truth.\n- the pink pill is about improving the female image by augmenting sexual marketplace value.", "## Dataset Card Authors [optional]\n\nsteamcyclone, all the redditors from the subreddits in the authors columns.", "## Dataset Card Contact\n\n- N/A" ]
7db61bfc328b1a809893f837cae29d3e44e57e26
A reformatted version of the [LeoLM/OpenSchnabeltier](https://huggingface.co/datasets/LeoLM/OpenSchnabeltier) dataset. Extracted from [seedboxventures/multitask_german_examples_32k](https://huggingface.co/datasets/seedboxventures/multitask_german_examples_32k). Translation created by [seedbox ai](https://huggingface.co/seedboxai) for [KafkaLM](https://huggingface.co/seedboxai/KafkaLM-70B-German-V0.1) ❤️. Available for finetuning in [hiyouga/LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory).
mayflowergmbh/openschnabeltier_de
[ "task_categories:text-generation", "language:de", "region:us" ]
2024-01-30T06:39:58+00:00
{"language": ["de"], "task_categories": ["text-generation"]}
2024-02-14T16:32:51+00:00
[]
[ "de" ]
TAGS #task_categories-text-generation #language-German #region-us
A reformatted version of the LeoLM/OpenSchnabeltier dataset. Extracted from seedboxventures/multitask_german_examples_32k. Translation created by seedbox ai for KafkaLM. Available for finetuning in hiyouga/LLaMA-Factory.
[]
[ "TAGS\n#task_categories-text-generation #language-German #region-us \n" ]
0fc602d3c136360262da8c65c68fa73b329e556e
A reformatted version of the [FreedomIntelligence/alpaca-gpt4-deutsch](https://huggingface.co/datasets/FreedomIntelligence/alpaca-gpt4-deutsch) dataset. Extracted from [seedboxventures/multitask_german_examples_32k](https://huggingface.co/datasets/seedboxventures/multitask_german_examples_32k). Translation created by [seedbox ai](https://huggingface.co/seedboxai) for [KafkaLM](https://huggingface.co/seedboxai/KafkaLM-70B-German-V0.1) ❤️. Available for finetuning in [hiyouga/LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory).
mayflowergmbh/alpaca-gpt4_de
[ "task_categories:text-generation", "language:de", "region:us" ]
2024-01-30T06:43:54+00:00
{"language": ["de"], "task_categories": ["text-generation"]}
2024-02-14T16:33:22+00:00
[]
[ "de" ]
TAGS #task_categories-text-generation #language-German #region-us
A reformatted version of the FreedomIntelligence/alpaca-gpt4-deutsch dataset. Extracted from seedboxventures/multitask_german_examples_32k. Translation created by seedbox ai for KafkaLM. Available for finetuning in hiyouga/LLaMA-Factory.
[]
[ "TAGS\n#task_categories-text-generation #language-German #region-us \n" ]
fbd327c00b2dabafaaa41902e2941866bef89eca
# Dataset Card for "ultrafeedback_binarized_narrow" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Asap7772/ultrafeedback_binarized_narrow
[ "region:us" ]
2024-01-30T06:44:44+00:00
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "prompt_id", "dtype": "string"}, {"name": "chosen", "dtype": "string"}, {"name": "rejected", "dtype": "string"}, {"name": "reward_chosen", "dtype": "float64"}, {"name": "reward_rejected", "dtype": "float64"}, {"name": "score_chosen", "dtype": "float64"}, {"name": "score_rejected", "dtype": "float64"}], "splits": [{"name": "train_prefs", "num_bytes": 184309550, "num_examples": 60672}], "download_size": 109198612, "dataset_size": 184309550}, "configs": [{"config_name": "default", "data_files": [{"split": "train_prefs", "path": "data/train_prefs-*"}]}]}
2024-01-30T07:15:44+00:00
[]
[]
TAGS #region-us
# Dataset Card for "ultrafeedback_binarized_narrow" More Information needed
[ "# Dataset Card for \"ultrafeedback_binarized_narrow\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"ultrafeedback_binarized_narrow\"\n\nMore Information needed" ]
6c329bdc99edada5ef1fd55a2b67ac3dd4c903f8
A reformatted version of the [DRXD1000/Dolly-15k-German](https://huggingface.co/datasets/DRXD1000/Dolly-15k-German) dataset. Available for finetuning in [hiyouga/LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory).
mayflowergmbh/dolly-15k_de
[ "task_categories:text-generation", "language:de", "region:us" ]
2024-01-30T06:49:05+00:00
{"language": ["de"], "task_categories": ["text-generation"]}
2024-02-14T16:34:34+00:00
[]
[ "de" ]
TAGS #task_categories-text-generation #language-German #region-us
A reformatted version of the DRXD1000/Dolly-15k-German dataset. Available for finetuning in hiyouga/LLaMA-Factory.
[]
[ "TAGS\n#task_categories-text-generation #language-German #region-us \n" ]
115c793eda9778cb2d425ffb754cdb8c4c6f7da1
A reformatted version of the [OpenAssistant/OASST-DE](https://huggingface.co/datasets/OpenAssistant/OASST-DE) dataset. Available for finetuning in [hiyouga/LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory).
mayflowergmbh/oasst_de
[ "task_categories:text-generation", "language:de", "region:us" ]
2024-01-30T06:55:28+00:00
{"language": ["de"], "task_categories": ["text-generation"]}
2024-02-14T16:34:14+00:00
[]
[ "de" ]
TAGS #task_categories-text-generation #language-German #region-us
A reformatted version of the OpenAssistant/OASST-DE dataset. Available for finetuning in hiyouga/LLaMA-Factory.
[]
[ "TAGS\n#task_categories-text-generation #language-German #region-us \n" ]
97624645187f720d2806cefde9cee02c0ec4b0b4
# Dataset Card for Dataset Name <!-- Provide a quick summary of the dataset. --> This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. 
--> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
moulichand/medformatted
[ "region:us" ]
2024-01-30T07:00:28+00:00
{}
2024-01-30T08:30:22+00:00
[]
[]
TAGS #region-us
# Dataset Card for Dataset Name This dataset card aims to be a base template for new datasets. It has been generated using this raw template. ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
9851ccc4fea851b0f43c4480b8a2795a1fcc0034
# AI Generated Image for Image Classification <!-- Provide a quick summary of the dataset. --> This dataset contains AI-generated images by Midjourney and human images taken from ImageNet. The dataset is meant for image classification tasks. ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** Deepankar Sharma
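Below is a minimal, hedged loading sketch. The repository id and the `image`/`label` columns (with class names `ai_gen` and `human`) follow this card's metadata; adjust if the schema differs.

```python
from datasets import load_dataset

# Minimal sketch: load the dataset and inspect one labelled example.
# Column and label names are taken from this card's metadata, not verified against the files.
dataset = load_dataset("ideepankarsharma2003/AIGeneratedImages_Midjourney")

example = dataset["train"][0]
label_feature = dataset["train"].features["label"]   # ClassLabel(names=["ai_gen", "human"])
print(label_feature.int2str(example["label"]))       # e.g. "ai_gen" or "human"
print(example["image"].size)                         # PIL image dimensions
```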
ideepankarsharma2003/AIGeneratedImages_Midjourney
[ "region:us" ]
2024-01-30T07:09:15+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "ai_gen", "1": "human"}}}}], "splits": [{"name": "train", "num_bytes": 20242565282.0, "num_examples": 18000}, {"name": "validation", "num_bytes": 21688393589.775, "num_examples": 20715}, {"name": "test", "num_bytes": 14590974798.406, "num_examples": 13354}], "download_size": 30126501705, "dataset_size": 56521933670.181}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}]}
2024-01-30T07:45:56+00:00
[]
[]
TAGS #region-us
# AI Generated Image for Image Classification This dataset contains AI generated images by Midjourney and Human images taken from Imagenet. The dataset is meant for Image Classification tasks. ## Dataset Details ### Dataset Description - Curated by: Deepankar Sharma
[ "# AI Generated Image for Image Classification\n\n\n\nThis dataset contains AI generated images by Midjourney and Human images taken from Imagenet. The dataset is meant for Image Classification tasks.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: Deepankar Sharma" ]
[ "TAGS\n#region-us \n", "# AI Generated Image for Image Classification\n\n\n\nThis dataset contains AI generated images by Midjourney and Human images taken from Imagenet. The dataset is meant for Image Classification tasks.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: Deepankar Sharma" ]
9ed3ca37922eae37ee1fdc668b39602165b6a3de
# Dataset Card for "maywell_ko_wikidata_QA_12800" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
CHOJW1004/maywell_ko_wikidata_QA_12800
[ "region:us" ]
2024-01-30T07:13:08+00:00
{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 13446050, "num_examples": 12800}], "download_size": 8131313, "dataset_size": 13446050}}
2024-01-30T07:13:12+00:00
[]
[]
TAGS #region-us
# Dataset Card for "maywell_ko_wikidata_QA_12800" More Information needed
[ "# Dataset Card for \"maywell_ko_wikidata_QA_12800\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"maywell_ko_wikidata_QA_12800\"\n\nMore Information needed" ]
8d3dca849fb9aa63671042f6fbc4597b0e7ed2b9
# Dataset Card for "keyword_ner" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
nitinbhayana/keyword_ner
[ "region:us" ]
2024-01-30T07:40:49+00:00
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 104030, "num_examples": 985}], "download_size": 29995, "dataset_size": 104030}}
2024-01-30T07:40:55+00:00
[]
[]
TAGS #region-us
# Dataset Card for "keyword_ner" More Information needed
[ "# Dataset Card for \"keyword_ner\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"keyword_ner\"\n\nMore Information needed" ]
c6cbf14c86af241c502c6c6b5ed3b2f7a6a1773e
# Dataset Card for Dataset Name <!-- Provide a quick summary of the dataset. --> This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. 
--> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
moneypitts/classicalarabic
[ "language:ar", "region:us" ]
2024-01-30T08:10:26+00:00
{"language": ["ar"]}
2024-01-30T08:52:04+00:00
[]
[ "ar" ]
TAGS #language-Arabic #region-us
# Dataset Card for Dataset Name This dataset card aims to be a base template for new datasets. It has been generated using this raw template. ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#language-Arabic #region-us \n", "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
2cf74cd689979de4e6330a46d2cdacdf7989094d
## Formatting Formatting is compliant with ChatML. "input" is the context and "output" is the expected model output to train on. ## Details See the repo file `generate_dataset.py` for exactly how this dataset was generated. An opinionated and filtered mix of the following datasets: - argilla/ultrafeedback-binarized-preferences-cleaned - heegyu/glaive-function-calling-v2-formatted - berkeley-nest/Nectar - argilla/distilabel-math-preference-dpo ## argilla/ultrafeedback-binarized-preferences-cleaned **Filter**: - `chosen-rating == 5` AND - `len(chosen) == 2` ## heegyu/glaive-function-calling-v2-formatted **Filter**: - `function_description != ''` **Transforms**: - Added a system message randomly selected from a pool of generic system messages. ## berkeley-nest/Nectar **Filter**: - has an answer with `rank == 1` AND - `turns > 1` AND - `good_natured == True` AND - answer.to_lower() does not start with "i'm sorry" ## argilla/distilabel-math-preference-dpo **Filter**: - `chosen-rating >= 9` **Transforms**: - Added a system message randomly selected from a pool of generic system messages. - Removed the phrase "Take a deep breath, think step by step, and give an accurate response" ## Global formatting All the above datasets were formatted to comply with ChatML.
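As an illustration of the filtering described above, a minimal sketch using the `datasets` library might look like this. It is not the repository's actual pipeline (see `generate_dataset.py` for that), and the column names `chosen-rating` and `chosen` are assumed to match the source dataset's schema based on the notation used in this card.

```python
from datasets import load_dataset

# Sketch of the ultrafeedback filter described above; generate_dataset.py is the
# authoritative implementation. Column names follow this card's notation and are
# assumed to exist in the source dataset.
source = load_dataset("argilla/ultrafeedback-binarized-preferences-cleaned", split="train")

filtered = source.filter(
    lambda row: row["chosen-rating"] == 5 and len(row["chosen"]) == 2
)
print(f"kept {len(filtered)} of {len(source)} rows")
```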
andysalerno/rainbowfish-v1
[ "region:us" ]
2024-01-30T08:17:56+00:00
{"dataset_info": {"features": [{"name": "source", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 148983951, "num_examples": 69980}], "download_size": 70573434, "dataset_size": 148983951}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-02-08T06:06:02+00:00
[]
[]
TAGS #region-us
## Formatting Formatting is compliant with ChatML. "input" is the context and "output" is the expected model output to train on. ## Details See the repo file 'generate_dataset.py' for exactly this dataset was generated. An opinionated and filtered mix of the following datasets: - argilla/ultrafeedback-binarized-preferences-cleaned - heegyu/glaive-function-calling-v2-formatted - berkeley-nest/Nectar - argilla/distilabel-math-preference-dpo ## argilla/ultrafeedback-binarized-preferences-cleaned Filter: - 'chosen-rating == 5' AND - 'len(chosen) == 2' ## heegyu/glaive-function-calling-v2-formatted Filter: - 'function_description != ''' Transforms: - Added a system message randomly selected from a pool of generic system messages. ## berkeley-nest/Nectar Filter: - has an answer with 'rank == 1' AND - 'turns > 1' AND - 'good_natured == True' AND - answer.to_lower() does not start with "i'm sorry" ## argilla/distilabel-math-preference-dpo Filter: - 'chosen-rating >= 9' Transforms: - Added a system message randomly selected from a pool of generic system messages. - Removed the phrase "Take a deep breath, think step by step, and give an accurate response" ## Global formatting All the above datasets were formatted to comply with ChatML.
[ "## Formatting\n\nFormatting is compliant with ChatML. \"input\" is the context and \"output\" is the expected model output to train on.", "## Details\n\nSee the repo file 'generate_dataset.py' for exactly this dataset was generated.\n\nAn opinionated and filtered mix of the following datasets:\n\n- argilla/ultrafeedback-binarized-preferences-cleaned\n- heegyu/glaive-function-calling-v2-formatted\n- berkeley-nest/Nectar\n- argilla/distilabel-math-preference-dpo", "## argilla/ultrafeedback-binarized-preferences-cleaned\n\nFilter: \n- 'chosen-rating == 5' AND\n- 'len(chosen) == 2'", "## heegyu/glaive-function-calling-v2-formatted\n\nFilter: \n- 'function_description != '''\n\nTransforms: \n- Added a system message randomly selected from a pool of generic system messages.", "## berkeley-nest/Nectar\n\nFilter: \n- has an answer with 'rank == 1' AND\n- 'turns > 1' AND\n- 'good_natured == True' AND\n- answer.to_lower() does not start with \"i'm sorry\"", "## argilla/distilabel-math-preference-dpo\n\nFilter: \n- 'chosen-rating >= 9'\n\nTransforms: \n- Added a system message randomly selected from a pool of generic system messages.\n- Removed the phrase \"Take a deep breath, think step by step, and give an accurate response\"", "## Global formatting\n\nAll the above datasets were formatted to comply with ChatML." ]
[ "TAGS\n#region-us \n", "## Formatting\n\nFormatting is compliant with ChatML. \"input\" is the context and \"output\" is the expected model output to train on.", "## Details\n\nSee the repo file 'generate_dataset.py' for exactly this dataset was generated.\n\nAn opinionated and filtered mix of the following datasets:\n\n- argilla/ultrafeedback-binarized-preferences-cleaned\n- heegyu/glaive-function-calling-v2-formatted\n- berkeley-nest/Nectar\n- argilla/distilabel-math-preference-dpo", "## argilla/ultrafeedback-binarized-preferences-cleaned\n\nFilter: \n- 'chosen-rating == 5' AND\n- 'len(chosen) == 2'", "## heegyu/glaive-function-calling-v2-formatted\n\nFilter: \n- 'function_description != '''\n\nTransforms: \n- Added a system message randomly selected from a pool of generic system messages.", "## berkeley-nest/Nectar\n\nFilter: \n- has an answer with 'rank == 1' AND\n- 'turns > 1' AND\n- 'good_natured == True' AND\n- answer.to_lower() does not start with \"i'm sorry\"", "## argilla/distilabel-math-preference-dpo\n\nFilter: \n- 'chosen-rating >= 9'\n\nTransforms: \n- Added a system message randomly selected from a pool of generic system messages.\n- Removed the phrase \"Take a deep breath, think step by step, and give an accurate response\"", "## Global formatting\n\nAll the above datasets were formatted to comply with ChatML." ]
f32756e7ba589854730526745949e611b8f827ed
# Dataset Card for "orca-tr" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
malhajar/orca-tr
[ "region:us" ]
2024-01-30T08:51:50+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "system_prompt", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "system_prompt-turkish", "dtype": "string"}, {"name": "question-turkish", "dtype": "string"}, {"name": "response-turkish", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8500889145, "num_examples": 2352811}], "download_size": 4792916697, "dataset_size": 8500889145}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-30T10:39:18+00:00
[]
[]
TAGS #region-us
# Dataset Card for "orca-tr" More Information needed
[ "# Dataset Card for \"orca-tr\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"orca-tr\"\n\nMore Information needed" ]
bcdfa089548ff2f7298b5427ee41627e57ab59ee
This is a dataset created using [vector-io](https://github.com/ai-northstar-tech/vector-io)
aintech/vdf_20240129_162001_59275_arxiv_titles
[ "vdf", "vector-io", "vector-dataset", "vector-embeddings", "region:us" ]
2024-01-30T09:23:17+00:00
{"tags": ["vdf", "vector-io", "vector-dataset", "vector-embeddings"]}
2024-01-30T09:38:23+00:00
[]
[]
TAGS #vdf #vector-io #vector-dataset #vector-embeddings #region-us
This is a dataset created using vector-io
[]
[ "TAGS\n#vdf #vector-io #vector-dataset #vector-embeddings #region-us \n" ]
96d4da91163089616b6449d0e62f3e3ceaaf8108
This is a dataset created using [vector-io](https://github.com/ai-northstar-tech/vector-io)
aintech/vdf_20240130_114906_3faa2_arxiv_abstracts
[ "vdf", "vector-io", "vector-dataset", "vector-embeddings", "region:us" ]
2024-01-30T09:39:10+00:00
{"tags": ["vdf", "vector-io", "vector-dataset", "vector-embeddings"]}
2024-01-30T09:51:47+00:00
[]
[]
TAGS #vdf #vector-io #vector-dataset #vector-embeddings #region-us
This is a dataset created using vector-io
[]
[ "TAGS\n#vdf #vector-io #vector-dataset #vector-embeddings #region-us \n" ]
54c274b376ec1070cbcf793299a7d3ad6d3f3b9c
This is a dataset created using [vector-io](https://github.com/ai-northstar-tech/vector-io)
aintech/vdf_20240130_145922_55bb9_wolt_food
[ "vdf", "vector-io", "vector-dataset", "vector-embeddings", "region:us" ]
2024-01-30T10:19:35+00:00
{"tags": ["vdf", "vector-io", "vector-dataset", "vector-embeddings"]}
2024-01-30T10:25:33+00:00
[]
[]
TAGS #vdf #vector-io #vector-dataset #vector-embeddings #region-us
This is a dataset created using vector-io
[]
[ "TAGS\n#vdf #vector-io #vector-dataset #vector-embeddings #region-us \n" ]
de83ba77697657d8aec28113da5c3b7c81b2081a
# TeleQnA ## Introduction TeleQnA is a comprehensive dataset tailored to assess the knowledge of Large Language Models (LLMs) in the field of telecommunications. It encompasses 10,000 multiple-choice questions distributed across five distinct categories: - **Lexicon:** This category comprises 500 questions that delve into the realm of general telecom terminology and definitions. - **Research overview:** Comprising 2,000 questions, this category provides a broad overview of telecom research, spanning a wide spectrum of telecom-related topics. - **Research publications:** With 4,500 questions, this category contains detailed inquiries regarding multi-disciplanary research in telecommunications, drawing from a variety of sources such as transactions and conferences proceedings. - **Standards overview:** This category consists of 1,000 questions related to summaries of standards from multiple standarization bodies like 3GPP and IEEE. - **Standards specifications:** With 2,000 questions, this category explores the technical specifications and practical implementations of telecommunications systems, leveraging information from standardization bodies like 3GPP and IEEE. For more in-depth information about the dataset and the generation process, please refer to our paper by following this [link](https://arxiv.org/abs/2310.15051). To prevent inadvertent data contamination with models trained using GitHub data, we have implemented a password protection measure for unzipping the dataset. The password is ```teleqnadataset```. ## Dataset Format Each question is represented in JSON format, comprising five distinct fields: - **Question:** This field consists of a string that presents the question associated with a specific concept within the telecommunications domain. - **Options:** This field comprises a set of strings representing the various answer options. - **Answer:** This field contains a string that adheres to the format ’option ID: Answer’ and presents the correct response to the question. A single option is correct; however, options may include choices like “All of the Above” or “Both options 1 and 2”. - **Explanation:** This field encompasses a string that clarifies the reasoning behind the correct answer. - **Category:** This field includes a label identifying the source category (e.g., lexicon, research overview, etc.). ## Dataset Instance An example of the dataset is provided below: ``` question 2045: { "question": "What is the maximum number of eigenmodes that the MIMO channel can support? (nt is the number of transmit antennas, nr is the number of receive antennas)", "option 1": "nt", "option 2": "nr", "option 3": "min(nt, nr)", "option 4": "max(nt, nr)", "answer": "option 3: min(nt, nr)", "explanation": "The maximum number of eigenmodes that the MIMO channel can support is min(nt, nr).", "category": "Research publications" } ``` ## Experiments Code The provided code allows to evaluate the performance of OpenAI's models (e.g., GPT-3.5). To do so, follow the below steps: - Clone the repository - Unzip TeleQnA.zip using the password ```teleqnadataset``` - Install the required dependencies using the following command: ```pip install -r requirements.txt``` - Insert OpenAI's API key into the evaluation_tools script. - Run the command below ```python run.py``` Upon completion, a .txt file in JSON format is generated. This file contains the original dataset, with two additional fields added to each question: - **tested answer:** This field contains the answer chosen by the tested model. 
- **correct:** This field is marked as "True" when the tested answer matches the designated correct answer in the dataset. # Citation If you would like to use the data or code, please cite the paper: ``` @misc{maatouk2023teleqna, title={TeleQnA: A Benchmark Dataset to Assess Large Language Models Telecommunications Knowledge}, author={Ali Maatouk and Fadhel Ayed and Nicola Piovesan and Antonio De Domenico and Merouane Debbah and Zhi-Quan Luo}, year={2023}, eprint={2310.15051}, archivePrefix={arXiv}, primaryClass={cs.IT} } ```
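As an illustration of the output format described above, a hedged sketch for scoring the generated results file is shown below. The file name `TeleQnA_results.txt` is an assumption (use whatever name `run.py` actually produces), and the `correct` field is treated as either a boolean or the string "True", since the card does not pin down its type.

```python
import json

# Illustrative only: compute per-category accuracy from the evaluated question file.
# The file name is an assumption; run.py's actual output name may differ.
with open("TeleQnA_results.txt", "r") as f:
    questions = json.load(f)          # e.g. {"question 2045": {...}, ...}

per_category = {}
for entry in questions.values():
    total, right = per_category.get(entry["category"], (0, 0))
    is_right = str(entry["correct"]) == "True"   # handles bool True or string "True"
    per_category[entry["category"]] = (total + 1, right + int(is_right))

for category, (total, right) in per_category.items():
    print(f"{category}: {right}/{total} = {right / total:.2%}")
```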
netop/TeleQnA
[ "arxiv:2310.15051", "region:us" ]
2024-01-30T10:41:19+00:00
{}
2024-01-30T13:05:26+00:00
[ "2310.15051" ]
[]
TAGS #arxiv-2310.15051 #region-us
# TeleQnA ## Introduction TeleQnA is a comprehensive dataset tailored to assess the knowledge of Large Language Models (LLMs) in the field of telecommunications. It encompasses 10,000 multiple-choice questions distributed across five distinct categories: - Lexicon: This category comprises 500 questions that delve into the realm of general telecom terminology and definitions. - Research overview: Comprising 2,000 questions, this category provides a broad overview of telecom research, spanning a wide spectrum of telecom-related topics. - Research publications: With 4,500 questions, this category contains detailed inquiries regarding multi-disciplanary research in telecommunications, drawing from a variety of sources such as transactions and conferences proceedings. - Standards overview: This category consists of 1,000 questions related to summaries of standards from multiple standarization bodies like 3GPP and IEEE. - Standards specifications: With 2,000 questions, this category explores the technical specifications and practical implementations of telecommunications systems, leveraging information from standardization bodies like 3GPP and IEEE. For more in-depth information about the dataset and the generation process, please refer to our paper by following this link. To prevent inadvertent data contamination with models trained using GitHub data, we have implemented a password protection measure for unzipping the dataset. The password is . ## Dataset Format Each question is represented in JSON format, comprising five distinct fields: - Question: This field consists of a string that presents the question associated with a specific concept within the telecommunications domain. - Options: This field comprises a set of strings representing the various answer options. - Answer: This field contains a string that adheres to the format ’option ID: Answer’ and presents the correct response to the question. A single option is correct; however, options may include choices like “All of the Above” or “Both options 1 and 2”. - Explanation: This field encompasses a string that clarifies the reasoning behind the correct answer. - Category: This field includes a label identifying the source category (e.g., lexicon, research overview, etc.). ## Dataset Instance An example of the dataset is provided below: ## Experiments Code The provided code allows to evaluate the performance of OpenAI's models (e.g., GPT-3.5). To do so, follow the below steps: - Clone the repository - Unzip URL using the password - Install the required dependencies using the following command: - Insert OpenAI's API key into the evaluation_tools script. - Run the command below Upon completion, a .txt file in JSON format is generated. This file contains the original dataset, with two additional fields added to each question: - tested answer: This field contains the answer chosen by the tested model. - correct: This field is marked as "True" when the tested answer matches the designated correct answer in the dataset. If you would like to use the data or code, please cite the paper:
[ "# TeleQnA", "## Introduction\n\nTeleQnA is a comprehensive dataset tailored to assess the knowledge of Large Language Models (LLMs) in the field of telecommunications. It encompasses 10,000 multiple-choice questions distributed across five distinct categories:\n\n- Lexicon: This category comprises 500 questions that delve into the realm of general telecom terminology and definitions.\n\n- Research overview: Comprising 2,000 questions, this category provides a broad overview of telecom research, spanning a wide spectrum of telecom-related topics.\n\n- Research publications: With 4,500 questions, this category contains detailed inquiries regarding multi-disciplanary research in telecommunications, drawing from a variety of sources such as transactions and conferences proceedings.\n\n- Standards overview: This category consists of 1,000 questions related to summaries of standards from multiple standarization bodies like 3GPP and IEEE.\n\n- Standards specifications: With 2,000 questions, this category explores the technical specifications and practical implementations of telecommunications systems, leveraging information from standardization bodies like 3GPP and IEEE.\n\nFor more in-depth information about the dataset and the generation process, please refer to our paper by following this link. To prevent inadvertent data contamination with models trained using GitHub data, we have implemented a password protection measure for unzipping the dataset. The password is .", "## Dataset Format\n\nEach question is represented in JSON format, comprising five distinct fields:\n\n- Question: This field consists of a string that presents the question associated with a specific concept within the telecommunications domain.\n\n- Options: This field comprises a set of strings representing the various answer options.\n\n- Answer: This field contains a string that adheres to the format ’option ID: Answer’ and presents the correct response to the question. A single option is correct; however, options may include choices like “All of the Above” or “Both options 1 and 2”.\n\n- Explanation: This field encompasses a string that clarifies the reasoning behind the correct answer.\n\n- Category: This field includes a label identifying the source category (e.g., lexicon, research overview, etc.).", "## Dataset Instance \n\nAn example of the dataset is provided below:", "## Experiments Code\n\nThe provided code allows to evaluate the performance of OpenAI's models (e.g., GPT-3.5). To do so, follow the below steps:\n\n- Clone the repository\n- Unzip URL using the password \n- Install the required dependencies using the following command:\n\n\n\n- Insert OpenAI's API key into the evaluation_tools script.\n- Run the command below\n\n \n\nUpon completion, a .txt file in JSON format is generated. This file contains the original dataset, with two additional fields added to each question:\n\n- tested answer: This field contains the answer chosen by the tested model.\n\n- correct: This field is marked as \"True\" when the tested answer matches the designated correct answer in the dataset.\n\nIf you would like to use the data or code, please cite the paper:" ]
[ "TAGS\n#arxiv-2310.15051 #region-us \n", "# TeleQnA", "## Introduction\n\nTeleQnA is a comprehensive dataset tailored to assess the knowledge of Large Language Models (LLMs) in the field of telecommunications. It encompasses 10,000 multiple-choice questions distributed across five distinct categories:\n\n- Lexicon: This category comprises 500 questions that delve into the realm of general telecom terminology and definitions.\n\n- Research overview: Comprising 2,000 questions, this category provides a broad overview of telecom research, spanning a wide spectrum of telecom-related topics.\n\n- Research publications: With 4,500 questions, this category contains detailed inquiries regarding multi-disciplanary research in telecommunications, drawing from a variety of sources such as transactions and conferences proceedings.\n\n- Standards overview: This category consists of 1,000 questions related to summaries of standards from multiple standarization bodies like 3GPP and IEEE.\n\n- Standards specifications: With 2,000 questions, this category explores the technical specifications and practical implementations of telecommunications systems, leveraging information from standardization bodies like 3GPP and IEEE.\n\nFor more in-depth information about the dataset and the generation process, please refer to our paper by following this link. To prevent inadvertent data contamination with models trained using GitHub data, we have implemented a password protection measure for unzipping the dataset. The password is .", "## Dataset Format\n\nEach question is represented in JSON format, comprising five distinct fields:\n\n- Question: This field consists of a string that presents the question associated with a specific concept within the telecommunications domain.\n\n- Options: This field comprises a set of strings representing the various answer options.\n\n- Answer: This field contains a string that adheres to the format ’option ID: Answer’ and presents the correct response to the question. A single option is correct; however, options may include choices like “All of the Above” or “Both options 1 and 2”.\n\n- Explanation: This field encompasses a string that clarifies the reasoning behind the correct answer.\n\n- Category: This field includes a label identifying the source category (e.g., lexicon, research overview, etc.).", "## Dataset Instance \n\nAn example of the dataset is provided below:", "## Experiments Code\n\nThe provided code allows to evaluate the performance of OpenAI's models (e.g., GPT-3.5). To do so, follow the below steps:\n\n- Clone the repository\n- Unzip URL using the password \n- Install the required dependencies using the following command:\n\n\n\n- Insert OpenAI's API key into the evaluation_tools script.\n- Run the command below\n\n \n\nUpon completion, a .txt file in JSON format is generated. This file contains the original dataset, with two additional fields added to each question:\n\n- tested answer: This field contains the answer chosen by the tested model.\n\n- correct: This field is marked as \"True\" when the tested answer matches the designated correct answer in the dataset.\n\nIf you would like to use the data or code, please cite the paper:" ]
7c2f195622a3bbce5d6df1a04db5c0168b6221aa
# Dataset Card for "Promoter Validation Data" ## Table of Contents - [Overview](#overview) - [Dataset Description](#dataset-description) - [Data Collection and Processing](#data-collection-and-processing) - [Dataset Structure](#dataset-structure) - [Data Partitioning and Utilization](#data-partitioning-and-utilization) - [Dataset Splits](#dataset-splits) - [Dataset Fields](#dataset-fields) - [Use Cases & Limitations](#use-cases--limitations) - [How to Use](#how-to-use) - [Citation](#citation) - [Contact](#contact) ## Overview This dataset contains promoter sequences used for validation purposes in genetic research, focusing on prokaryotic promoters. It serves to facilitate the study of gene expression regulation, providing a comprehensive set of promoter sequences from various organisms. The dataset comprises known promoter sequences from 75 organisms, retrieved from the Prokaryotic Promoter Database (PPD), alongside non-promoter sequences obtained from the NCBI RefSeq database (specifically sampled from CDS regions). It also includes non-promoter sequences constructed via higher and zero-order Markov chains that mirror compositional characteristics of known promoters. Additionally, an independent test set, focusing on *E. coli* sigma70 promoters, was employed, curated by Cassiano and Silva-Rocha (2020). A balanced distribution approach was adopted to even out the number of positive and negative samples, with the dataset being systematically divided into training, validation, and test subsets. This stratification underpins a thorough evaluation of the model efficacy. ![Promoter Dataset Overview](https://github.com/nbrg-ppcu/prokbert/blob/main/assets/Figure4_promoter_db.png?raw=true) *Figure 1: Promoter dataset - overview* The prokaryotic promoter sequences are typically 81bp long, ensuring compatibility with most tools' input prerequisites, particularly around the putative TSS region interval $[-60, +20]$. Our positive dataset encompasses promoter sequences from various species, predominantly found on both chromosomes and plasmids. Promoters included in the independent test set, based on exact match, were excluded from the training data. Species and contigs were mapped to NCBI assembly and sequence accessions. To curate comprehensive non-promoter sequences (negative samples), we employed three strategies: 1. Using non-promoter sequences (CDS -- Coding Sequences). 2. Random sequences generated with a 3rd-order Markov chain. 3. Pure random sequences (0-order Markov chain) as proposed by Cassiano and Silva-Rocha (2020). The distribution of this composite dataset was 40% CDS, 40% Markov-derived random sequences, and 20% pure random sequences (0-order Markov chain). One practical application of promoter detection in coding sequences is to check whether an unintentional promoter is injected or can be located inside a modified or designed coding sequence region, causing disruption. To cover this use-case, we incorporated the coding regions into our training and evaluation dataset. The CDS sequences were extracted from the genomic sequences of contigs, based on annotations from NCBI. The 81bp long CDS region samples were selected based on the NCBI-provided annotations for the available contigs with respect to the underlying species. The promoter regions often contain AT-rich sequences, i.e., TATA box. To capture and model the AT-rich regions, we applied 3rd and 0 order Markov chains to generate sequence examples that reflect the compositional property of known promoters. 
## Dataset Description ### Data Collection and Processing - **Data Source**: The positive samples (known promoters) are primarily drawn from the Prokaryotic Promoter Database (PPD), containing experimentally validated promoter sequences from 75 organisms. Non-promoter sequences are obtained from the NCBI RefSeq database, sampled specifically from CDS regions. - **Preprocessing**: The dataset includes non-promoter sequences constructed via higher- and zero-order Markov chains, which mirror compositional characteristics of known promoters. An independent test set based on _E. coli_ sigma70 promoters is also included. ### Dataset Structure - **Dataset Splits**: The dataset is systematically divided into training, validation, and test subsets. - **Data Fields**: - `segment_id`: Unique identifier for each segment. - `ppd_original_SpeciesName`: Original species name from the PPD. - `Strand`: The strand of the DNA sequence. - `segment`: The DNA sequence of the segment (promoter or non-promoter). - `label`: The label indicating whether the sequence is a promoter or non-promoter. - `L`: Length of the DNA sequence. - `prom_class`: The class of the promoter. - `y`: Binary label indicating the presence of a promoter. ## Data Partitioning and Utilization To ensure comprehensive evaluation, the dataset was split randomly into three parts: 1. **Training set**: 80% of the total data, for model development and training. 2. **Validation set**: 10% of the data, aiding in fine-tuning model parameters. 3. **Test set**: the remaining 10% of the data, crucial for unbiased model performance evaluation. ## Dataset Splits - **Training Set**: Primary dataset used for model training. - **Test Set (Sigma70)**: Independent test set focusing on _E. coli_ sigma70 promoters. - **Multispecies Set**: Additional test set including various species, ensuring generalization across different organisms. ## Use Cases & Limitations This dataset is intended for use in training models for promoter prediction and understanding gene expression regulation. Limitations include the specific focus on prokaryotic promoters and the method of sequence generation using Markov chains. ## How to Use You can load the dataset using the Hugging Face `datasets` library (a slightly fuller inspection sketch is given after the citation below): ```python from datasets import load_dataset dataset = load_dataset("neuralbioinfo/bacterial_promoters") ``` ## Contact Information For any questions, feedback, or contributions regarding the datasets or ProkBERT, please feel free to reach out: - **Name**: Balázs Ligeti - **Email**: [email protected] We welcome your input and collaboration to improve our resources and research. ## Citation ```bibtex @Article{ProkBERT2024, author = {Ligeti, Balázs et al.}, journal = {Frontiers in Microbiology}, title = {{ProkBERT} family: genomic language models}, year = {2024}, volume = {14}, URL = {https://www.frontiersin.org/articles/10.3389/fmicb.2023.1331233}, DOI = {10.3389/fmicb.2023.1331233} } ```
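As a small extension of the loading snippet in the How to Use section, here is a minimal sketch of inspecting the splits and fields. The split and field names are taken from this card's Data Fields list and split description; beyond that, the access pattern shown is generic `datasets` usage rather than an official recipe.

```python
from datasets import load_dataset

# Splits declared for this dataset: train, test_sigma70, test_multispecies.
dataset = load_dataset("neuralbioinfo/bacterial_promoters")

for split_name, split in dataset.items():
    print(split_name, len(split))

# Inspect one training record; field names follow the card's Data Fields list.
example = dataset["train"][0]
print(example["segment_id"], example["ppd_original_SpeciesName"], example["prom_class"])
print(example["segment"][: example["L"]], "->", example["y"])

# Column-wise access for a simple promoter vs. non-promoter classifier.
train_sequences = dataset["train"]["segment"]
train_labels = dataset["train"]["y"]
```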
neuralbioinfo/bacterial_promoters
[ "license:cc-by-nc-4.0", "region:us" ]
2024-01-30T10:52:38+00:00
{"license": "cc-by-nc-4.0", "dataset_info": {"features": [{"name": "segment_id", "dtype": "string"}, {"name": "ppd_original_SpeciesName", "dtype": "string"}, {"name": "Strand", "dtype": "string"}, {"name": "segment", "dtype": "string"}, {"name": "label", "dtype": "string"}, {"name": "L", "dtype": "int64"}, {"name": "prom_class", "dtype": "string"}, {"name": "y", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 36500005, "num_examples": 223092}, {"name": "test_sigma70", "num_bytes": 315288, "num_examples": 1864}, {"name": "test_multispecies", "num_bytes": 3695275, "num_examples": 22582}], "download_size": 12331840, "dataset_size": 40510568}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test_sigma70", "path": "data/test_sigma70-*"}, {"split": "test_multispecies", "path": "data/test_multispecies-*"}]}]}
2024-02-16T14:33:50+00:00
[]
[]
TAGS #license-cc-by-nc-4.0 #region-us
# Dataset Card for "Promoter Validation Data" ## Table of Contents - Overview - Dataset Description - Data Collection and Processing - Dataset Structure - Data Partitioning and Utilization - Dataset Splits - Dataset Fields - Use Cases & Limitations - How to Use - Citation - Contact ## Overview This dataset contains promoter sequences used for validation purposes in genetic research, focusing on prokaryotic promoters. It serves to facilitate the study of gene expression regulation, providing a comprehensive set of promoter sequences from various organisms. The dataset comprises known promoter sequences from 75 organisms, retrieved from the Prokaryotic Promoter Database (PPD), alongside non-promoter sequences obtained from the NCBI RefSeq database (specifically sampled from CDS regions). It also includes non-promoter sequences constructed via higher and zero-order Markov chains that mirror compositional characteristics of known promoters. Additionally, an independent test set, focusing on *E. coli* sigma70 promoters, was employed, curated by Cassiano and Silva-Rocha (2020). A balanced distribution approach was adopted to even out the number of positive and negative samples, with the dataset being systematically divided into training, validation, and test subsets. This stratification underpins a thorough evaluation of the model efficacy. !Promoter Dataset Overview *Figure 1: Promoter dataset - overview* The prokaryotic promoter sequences are typically 81bp long, ensuring compatibility with most tools' input prerequisites, particularly around the putative TSS region interval $[-60, +20]$. Our positive dataset encompasses promoter sequences from various species, predominantly found on both chromosomes and plasmids. Promoters included in the independent test set, based on exact match, were excluded from the training data. Species and contigs were mapped to NCBI assembly and sequence accessions. To curate comprehensive non-promoter sequences (negative samples), we employed three strategies: 1. Using non-promoter sequences (CDS -- Coding Sequences). 2. Random sequences generated with a 3rd-order Markov chain. 3. Pure random sequences (0-order Markov chain) as proposed by Cassiano and Silva-Rocha (2020). The distribution of this composite dataset was 40% CDS, 40% Markov-derived random sequences, and 20% pure random sequences (0-order Markov chain). One practical application of promoter detection in coding sequences is to check whether an unintentional promoter is injected or can be located inside a modified or designed coding sequence region, causing disruption. To cover this use-case, we incorporated the coding regions into our training and evaluation dataset. The CDS sequences were extracted from the genomic sequences of contigs, based on annotations from NCBI. The 81bp long CDS region samples were selected based on the NCBI-provided annotations for the available contigs with respect to the underlying species. The promoter regions often contain AT-rich sequences, i.e., TATA box. To capture and model the AT-rich regions, we applied 3rd and 0 order Markov chains to generate sequence examples that reflect the compositional property of known promoters. ## Dataset Description ### Data Collection and Processing - Data Source: The positive samples, known promoters, are primarily drawn from the Prokaryotic Promoter Database (PPD), containing experimentally validated promoter sequences from 75 organisms. 
Non-promoter sequences are obtained from the NCBI RefSeq database, sampled specifically from CDS regions. - Preprocessing: The dataset includes non-promoter sequences constructed via higher and zero-order Markov chains, which mirror compositional characteristics of known promoters. An independent test set based on _E.coli_ sigma70 promoters is also included. ### Dataset Structure - Dataset Splits: The dataset is systematically divided into training, validation, and test subsets. - Data Fields: - 'segment_id': Unique identifier for each segment. - 'ppd_original_SpeciesName': Original species name from the PPD. - 'Strand': The strand of the DNA sequence. - 'segment': The DNA sequence of the promoter region. - 'label': The label indicating whether the sequence is a promoter or non-promoter. - 'L': Length of the DNA sequence. - 'prom_class': The class of the promoter. - 'y': Binary label indicating the presence of a promoter. ## Data Partitioning and Utilization To ensure comprehensive evaluation, the dataset was split randomly into three parts: 1. Training set: 80% of the total data for model development and training. 2. Validation set: 10% of the data, aiding in fine-tuning model parameters. 3. Test set: Remaining 10% of the data, crucial for unbiased model performance evaluation. ## Dataset Splits - Training Set: Primary dataset used for model training. - Test Set (Sigma70): Independent test set focusing on _E.coli_ sigma70 promoters. - Multispecies Set: Additional test set including various species, ensuring generalization across different organisms. ## Use Cases & Limitations This dataset is intended for use in training models for promoter prediction and understanding gene expression regulation. Limitations include the specific focus on prokaryotic promoters and the method of sequence generation using Markov chains. ## How to Use You can load the dataset using the Hugging Face 'datasets' library: ## Contact Information For any questions, feedback, or contributions regarding the datasets or ProkBERT, please feel free to reach out: - Name: Balázs Ligeti - Email: obalasz@URL We welcome your input and collaboration to improve our resources and research. '''bibtex @Article{ProkBERT2024, author = {Ligeti, Balázs et al.}, journal = {Frontiers in Microbiology}, title = {{ProkBERT} family: genomic language models}, year = {2024}, volume = {14}, URL = {URL DOI = {10.3389/fmicb.2023.1331233} }
[ "# Dataset Card for \"Promoter Validation Data\"", "## Table of Contents\n\n- Overview\n- Dataset Description\n - Data Collection and Processing\n - Dataset Structure\n- Data Partitioning and Utilization\n- Dataset Splits\n- Dataset Fields\n- Use Cases & Limitations\n- How to Use\n- Citation\n- Contact", "## Overview\n\nThis dataset contains promoter sequences used for validation purposes in genetic research, focusing on prokaryotic promoters. It serves to facilitate the study of gene expression regulation, providing a comprehensive set of promoter sequences from various organisms.\n\nThe dataset comprises known promoter sequences from 75 organisms, retrieved from the Prokaryotic Promoter Database (PPD), alongside non-promoter sequences obtained from the NCBI RefSeq database (specifically sampled from CDS regions). It also includes non-promoter sequences constructed via higher and zero-order Markov chains that mirror compositional characteristics of known promoters. Additionally, an independent test set, focusing on *E. coli* sigma70 promoters, was employed, curated by Cassiano and Silva-Rocha (2020).\n\nA balanced distribution approach was adopted to even out the number of positive and negative samples, with the dataset being systematically divided into training, validation, and test subsets. This stratification underpins a thorough evaluation of the model efficacy.\n\n\n!Promoter Dataset Overview\n*Figure 1: Promoter dataset - overview*\n\nThe prokaryotic promoter sequences are typically 81bp long, ensuring compatibility with most tools' input prerequisites, particularly around the putative TSS region interval $[-60, +20]$. Our positive dataset encompasses promoter sequences from various species, predominantly found on both chromosomes and plasmids. Promoters included in the independent test set, based on exact match, were excluded from the training data. Species and contigs were mapped to NCBI assembly and sequence accessions.\n\nTo curate comprehensive non-promoter sequences (negative samples), we employed three strategies:\n1. Using non-promoter sequences (CDS -- Coding Sequences).\n2. Random sequences generated with a 3rd-order Markov chain.\n3. Pure random sequences (0-order Markov chain) as proposed by Cassiano and Silva-Rocha (2020).\n\nThe distribution of this composite dataset was 40% CDS, 40% Markov-derived random sequences, and 20% pure random sequences (0-order Markov chain). One practical application of promoter detection in coding sequences is to check whether an unintentional promoter is injected or can be located inside a modified or designed coding sequence region, causing disruption. To cover this use-case, we incorporated the coding regions into our training and evaluation dataset. The CDS sequences were extracted from the genomic sequences of contigs, based on annotations from NCBI. The 81bp long CDS region samples were selected based on the NCBI-provided annotations for the available contigs with respect to the underlying species. The promoter regions often contain AT-rich sequences, i.e., TATA box. To capture and model the AT-rich regions, we applied 3rd and 0 order Markov chains to generate sequence examples that reflect the compositional property of known promoters.", "## Dataset Description", "### Data Collection and Processing\n\n- Data Source: The positive samples, known promoters, are primarily drawn from the Prokaryotic Promoter Database (PPD), containing experimentally validated promoter sequences from 75 organisms. 
Non-promoter sequences are obtained from the NCBI RefSeq database, sampled specifically from CDS regions.\n- Preprocessing: The dataset includes non-promoter sequences constructed via higher and zero-order Markov chains, which mirror compositional characteristics of known promoters. An independent test set based on _E.coli_ sigma70 promoters is also included.", "### Dataset Structure\n\n- Dataset Splits: The dataset is systematically divided into training, validation, and test subsets.\n- Data Fields:\n - 'segment_id': Unique identifier for each segment.\n - 'ppd_original_SpeciesName': Original species name from the PPD.\n - 'Strand': The strand of the DNA sequence.\n - 'segment': The DNA sequence of the promoter region.\n - 'label': The label indicating whether the sequence is a promoter or non-promoter.\n - 'L': Length of the DNA sequence.\n - 'prom_class': The class of the promoter.\n - 'y': Binary label indicating the presence of a promoter.", "## Data Partitioning and Utilization\n\nTo ensure comprehensive evaluation, the dataset was split randomly into three parts:\n1. Training set: 80% of the total data for model development and training.\n2. Validation set: 10% of the data, aiding in fine-tuning model parameters.\n3. Test set: Remaining 10% of the data, crucial for unbiased model performance evaluation.", "## Dataset Splits\n\n- Training Set: Primary dataset used for model training.\n- Test Set (Sigma70): Independent test set focusing on _E.coli_ sigma70 promoters.\n- Multispecies Set: Additional test set including various species, ensuring generalization across different organisms.", "## Use Cases & Limitations\n\nThis dataset is intended for use in training models for promoter prediction and understanding gene expression regulation. Limitations include the specific focus on prokaryotic promoters and the method of sequence generation using Markov chains.", "## How to Use\n\nYou can load the dataset using the Hugging Face 'datasets' library:", "## Contact Information\n\nFor any questions, feedback, or contributions regarding the datasets or ProkBERT, please feel free to reach out:\n\n- Name: Balázs Ligeti\n- Email: obalasz@URL\n\nWe welcome your input and collaboration to improve our resources and research.\n\n\n\n'''bibtex\n@Article{ProkBERT2024,\n author = {Ligeti, Balázs et al.},\n journal = {Frontiers in Microbiology},\n title = {{ProkBERT} family: genomic language models},\n year = {2024},\n volume = {14},\n URL = {URL\n DOI = {10.3389/fmicb.2023.1331233}\n}" ]
[ "TAGS\n#license-cc-by-nc-4.0 #region-us \n", "# Dataset Card for \"Promoter Validation Data\"", "## Table of Contents\n\n- Overview\n- Dataset Description\n - Data Collection and Processing\n - Dataset Structure\n- Data Partitioning and Utilization\n- Dataset Splits\n- Dataset Fields\n- Use Cases & Limitations\n- How to Use\n- Citation\n- Contact", "## Overview\n\nThis dataset contains promoter sequences used for validation purposes in genetic research, focusing on prokaryotic promoters. It serves to facilitate the study of gene expression regulation, providing a comprehensive set of promoter sequences from various organisms.\n\nThe dataset comprises known promoter sequences from 75 organisms, retrieved from the Prokaryotic Promoter Database (PPD), alongside non-promoter sequences obtained from the NCBI RefSeq database (specifically sampled from CDS regions). It also includes non-promoter sequences constructed via higher and zero-order Markov chains that mirror compositional characteristics of known promoters. Additionally, an independent test set, focusing on *E. coli* sigma70 promoters, was employed, curated by Cassiano and Silva-Rocha (2020).\n\nA balanced distribution approach was adopted to even out the number of positive and negative samples, with the dataset being systematically divided into training, validation, and test subsets. This stratification underpins a thorough evaluation of the model efficacy.\n\n\n!Promoter Dataset Overview\n*Figure 1: Promoter dataset - overview*\n\nThe prokaryotic promoter sequences are typically 81bp long, ensuring compatibility with most tools' input prerequisites, particularly around the putative TSS region interval $[-60, +20]$. Our positive dataset encompasses promoter sequences from various species, predominantly found on both chromosomes and plasmids. Promoters included in the independent test set, based on exact match, were excluded from the training data. Species and contigs were mapped to NCBI assembly and sequence accessions.\n\nTo curate comprehensive non-promoter sequences (negative samples), we employed three strategies:\n1. Using non-promoter sequences (CDS -- Coding Sequences).\n2. Random sequences generated with a 3rd-order Markov chain.\n3. Pure random sequences (0-order Markov chain) as proposed by Cassiano and Silva-Rocha (2020).\n\nThe distribution of this composite dataset was 40% CDS, 40% Markov-derived random sequences, and 20% pure random sequences (0-order Markov chain). One practical application of promoter detection in coding sequences is to check whether an unintentional promoter is injected or can be located inside a modified or designed coding sequence region, causing disruption. To cover this use-case, we incorporated the coding regions into our training and evaluation dataset. The CDS sequences were extracted from the genomic sequences of contigs, based on annotations from NCBI. The 81bp long CDS region samples were selected based on the NCBI-provided annotations for the available contigs with respect to the underlying species. The promoter regions often contain AT-rich sequences, i.e., TATA box. 
To capture and model the AT-rich regions, we applied 3rd and 0 order Markov chains to generate sequence examples that reflect the compositional property of known promoters.", "## Dataset Description", "### Data Collection and Processing\n\n- Data Source: The positive samples, known promoters, are primarily drawn from the Prokaryotic Promoter Database (PPD), containing experimentally validated promoter sequences from 75 organisms. Non-promoter sequences are obtained from the NCBI RefSeq database, sampled specifically from CDS regions.\n- Preprocessing: The dataset includes non-promoter sequences constructed via higher and zero-order Markov chains, which mirror compositional characteristics of known promoters. An independent test set based on _E.coli_ sigma70 promoters is also included.", "### Dataset Structure\n\n- Dataset Splits: The dataset is systematically divided into training, validation, and test subsets.\n- Data Fields:\n - 'segment_id': Unique identifier for each segment.\n - 'ppd_original_SpeciesName': Original species name from the PPD.\n - 'Strand': The strand of the DNA sequence.\n - 'segment': The DNA sequence of the promoter region.\n - 'label': The label indicating whether the sequence is a promoter or non-promoter.\n - 'L': Length of the DNA sequence.\n - 'prom_class': The class of the promoter.\n - 'y': Binary label indicating the presence of a promoter.", "## Data Partitioning and Utilization\n\nTo ensure comprehensive evaluation, the dataset was split randomly into three parts:\n1. Training set: 80% of the total data for model development and training.\n2. Validation set: 10% of the data, aiding in fine-tuning model parameters.\n3. Test set: Remaining 10% of the data, crucial for unbiased model performance evaluation.", "## Dataset Splits\n\n- Training Set: Primary dataset used for model training.\n- Test Set (Sigma70): Independent test set focusing on _E.coli_ sigma70 promoters.\n- Multispecies Set: Additional test set including various species, ensuring generalization across different organisms.", "## Use Cases & Limitations\n\nThis dataset is intended for use in training models for promoter prediction and understanding gene expression regulation. Limitations include the specific focus on prokaryotic promoters and the method of sequence generation using Markov chains.", "## How to Use\n\nYou can load the dataset using the Hugging Face 'datasets' library:", "## Contact Information\n\nFor any questions, feedback, or contributions regarding the datasets or ProkBERT, please feel free to reach out:\n\n- Name: Balázs Ligeti\n- Email: obalasz@URL\n\nWe welcome your input and collaboration to improve our resources and research.\n\n\n\n'''bibtex\n@Article{ProkBERT2024,\n author = {Ligeti, Balázs et al.},\n journal = {Frontiers in Microbiology},\n title = {{ProkBERT} family: genomic language models},\n year = {2024},\n volume = {14},\n URL = {URL\n DOI = {10.3389/fmicb.2023.1331233}\n}" ]
af69ff795707da157b3ef84e654b54bda081d134
# 📄 arXivTection Dataset The arXivTection dataset serves as a benchmark designed for the task of detecting pretraining data from Large Language Models. The dataset consists of 50 research papers extracted from arXiv. - 25 published in 2023: Non-Training data, "_label_" column = 0. - 25 published before 2022: Training data, "_label_" column = 1. From each paper ≈ 30 passages are extracted. Each passage is paraphrased 3 times using the Language Model Claude v2.0. <br> The "_Answer_" column indicates which of the passages is the real excerpt.<br> Passages are extracted to be on average ≈ 128 tokens in length. <br> # 🧪 Testing Models on arXivTection Our dataset is intended to be used in a Multiple-Choice-Question-Answering format. Nonetheless, it is also compatible with other pretraining-data detection methods.<br> Our [GitHub](https://github.com/avduarte333/DE-COP_Method) repository contains example scripts to evaluate models on our dataset. <br> # 🤝 Compatibility The Multiple-Choice-Question-Answering task with our dataset is designed to be applied to various models, such as:<br> - LLaMA-2 - Mistral - Mixtral - Chat-GPT (gpt-3.5-turbo-instruct) - GPT-3 (text-davinci-003) - Claude <br> # 🔧 Loading the Dataset ```python from datasets import load_dataset dataset = load_dataset("avduarte333/arXivTection") ``` <br> # 💬 Citation ```bibtex @misc{duarte2024decop, title={{DE-COP: Detecting Copyrighted Content in Language Models Training Data}}, author={André V. Duarte and Xuandong Zhao and Arlindo L. Oliveira and Lei Li}, year={2024}, eprint={2402.09910}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <details> <summary>🎓 Research Papers Used</summary> [1] Attanasio, G., Plaza-del-Arco, F. M., Nozza, D., & Lauscher, A. (2023). A Tale of Pronouns: Interpretability Informs Gender Bias Mitigation for Fairer Instruction-Tuned Machine Translation. arXiv preprint arXiv:2310.12127. <br> [2] Shi, Y., Wu, L., & Shao, M. (2023). Adaptive End-to-End Metric Learning for Zero-Shot Cross-Domain Slot Filling. arXiv preprint arXiv:2310.15294. <br> [3] Keleg, A., Goldwater, S., & Magdy, W. (2023). ALDi: Quantifying the arabic level of dialectness of text. arXiv preprint arXiv:2310.13747. <br> [4] Su, Y., Ji, Y., Li, J., Ye, H., & Zhang, M. (2023, December). Beware of Model Collapse! Fast and Stable Test-time Adaptation for Robust Question Answering. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (pp. 12998-13011). <br> [5] Chang, Y., Lo, K., Goyal, T., & Iyyer, M. (2023). BooookScore: A systematic exploration of book-length summarization in the era of LLMs. arXiv preprint arXiv:2310.00785. <br> [6] Karamolegkou, A., Li, J., Zhou, L., & Søgaard, A. (2023). Copyright Violations and Large Language Models. arXiv preprint arXiv:2310.13771. <br> [7] Weissweiler, L., Hofmann, V., Kantharuban, A., Cai, A., Dutt, R., Hengle, A., ... & Mortensen, D. R. (2023). Counting the Bugs in ChatGPT's Wugs: A Multilingual Investigation into the Morphological Capabilities of a Large Language Model. arXiv preprint arXiv:2310.15113. <br> [8] Li, Z., & Zhang, Y. (2023, December). Cultural Concept Adaptation on Multimodal Reasoning. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (pp. 262-276). <br> [9] Jiang, W., Mao, Q., Li, J., Lin, C., Yang, W., Deng, T., & Wang, Z. (2023). DisCo: Distilled Student Models Co-training for Semi-supervised Text Mining. arXiv preprint arXiv:2305.12074. <br> [10] Zhu, Y., Si, J., Zhao, Y., Zhu, H., Zhou, D., & He, Y.
(2023). EXPLAIN, EDIT, GENERATE: Rationale-Sensitive Counterfactual Data Augmentation for Multi-hop Fact Verification. arXiv preprint arXiv:2310.14508. <br> [11] Hada, R., Seth, A., Diddee, H., & Bali, K. (2023). ''Fifty Shades of Bias'': Normative Ratings of Gender Bias in GPT Generated English Text. arXiv preprint arXiv:2310.17428. <br> [12] Song, Y., & Dhariwal, P. (2023). Improved techniques for training consistency models. arXiv preprint arXiv:2310.14189. <br> [13] Xu, W., Wang, D., Pan, L., Song, Z., Freitag, M., Wang, W. Y., & Li, L. (2023). Instructscore: Towards explainable text generation evaluation with automatic feedback. arXiv preprint arXiv:2305.14282. <br> [14] Majumder, B. P., He, Z., & McAuley, J. (2022). InterFair: Debiasing with Natural Language Feedback for Fair Interpretable Predictions. arXiv preprint arXiv:2210.07440. <br> [15] Yang, Z., Feng, R., Zhang, H., Shen, Y., Zhu, K., Huang, L., ... & Cheng, F. (2023). Eliminating Lipschitz Singularities in Diffusion Models. arXiv preprint arXiv:2306.11251. <br> [16] Li, J., Zhang, M., Guo, P., Zhang, M., & Zhang, Y. (2023). LLM-enhanced Self-training for Cross-domain Constituency Parsing. arXiv preprint arXiv:2311.02660. <br> [17] Ge, S., Zhang, Y., Liu, L., Zhang, M., Han, J., & Gao, J. (2023). Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs. arXiv preprint arXiv:2310.01801. <br> [18] Eustratiadis, P., Dudziak, Ł., Li, D., & Hospedales, T. (2023). Neural Fine-Tuning Search for Few-Shot Learning. arXiv preprint arXiv:2306.09295. <br> [19] Zhang, Y., Zhang, Y., Cui, L., & Fu, G. (2023). Non-autoregressive text editing with copy-aware latent alignments. arXiv preprint arXiv:2310.07821. <br> [20] Tu, H., Li, Y., Mi, F., & Yang, Z. (2023). ReSee: Responding through Seeing Fine-grained Visual Knowledge in Open-domain Dialogue. arXiv preprint arXiv:2305.13602. <br> [21] Deng, Y., Zhang, W., Pan, S. J., & Bing, L. (2023). SOUL: Towards Sentiment and Opinion Understanding of Language. arXiv preprint arXiv:2310.17924. <br> [22] Singh, G., Ghosh, S., Verma, A., Painkra, C., & Ekbal, A. (2023, December). Standardizing Distress Analysis: Emotion-Driven Distress Identification and Cause Extraction (DICE) in Multimodal Online Posts. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (pp. 4517-4532). <br> [23] Cao, Q., Kojima, T., Matsuo, Y., & Iwasawa, Y. (2023, December). Unnatural Error Correction: GPT-4 Can Almost Perfectly Handle Unnatural Scrambled Text. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (pp. 8898-8913). <br> [24] Cao, H., Yuan, L., Zhang, Y., & Ng, H. T. (2023, December). Unsupervised Grammatical Error Correction Rivaling Supervised Methods. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (pp. 3072-3088). <br> [25] Xu, S., Staufer, L., Ichim, O., Heri, C., & Grabmair, M. (2023). Vechr: A dataset for explainable and robust classification of vulnerability type in the european court of human rights. arXiv preprint arXiv:2310.11368. <br> [26] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., ... & Polosukhin, I. (2017). Attention is all you need. Advances in neural information processing systems, 30. <br> [27] Joulin, A., Grave, E., Bojanowski, P., & Mikolov, T. (2016). Bag of tricks for efficient text classification. arXiv preprint arXiv:1607.01759. <br> [28] Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. (2018). 
Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. <br> [29] Sarzynska-Wawer, J., Wawer, A., Pawlak, A., Szymanowska, J., Stefaniak, I., Jarkiewicz, M., & Okruszek, L. (2021). Detecting formal thought disorder by deep contextualized word representations. Psychiatry Research, 304, 114135. <br> [30] He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770-778). <br> [31] Sanh, V., Debut, L., Chaumond, J., & Wolf, T. (2019). DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108. <br> [32] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., ... & Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1), 5485-5551. <br> [33] Ren, S., He, K., Girshick, R., & Sun, J. (2015). Faster r-cnn: Towards real-time object detection with region proposal networks. Advances in neural information processing systems, 28. <br> [34] Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., ... & Bengio, Y. (2014). Generative adversarial nets. Advances in neural information processing systems, 27. <br> [35] Pennington, J., Socher, R., & Manning, C. D. (2014, October). Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP) (pp. 1532-1543). <br> [36] Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., ... & Rabinovich, A. (2015). Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1-9). <br> [37] Deng, J., Dong, W., Socher, R., Li, L. J., Li, K., & Fei-Fei, L. (2009, June). Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition (pp. 248-255). Ieee. <br> [38] Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems, 25. <br> [39] Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., ... & Amodei, D. (2020). Language models are few-shot learners. Advances in neural information processing systems, 33, 1877-1901. <br> [40] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. (2019). Language models are unsupervised multitask learners. OpenAI blog, 1(8), 9. <br> [41] Bahdanau, D., Cho, K., & Bengio, Y. (2014). Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. <br> [42] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., ... & Stoyanov, V. (2019). Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. <br> [43] Reimers, N., & Gurevych, I. (2019). Sentence-bert: Sentence embeddings using siamese bert-networks. arXiv preprint arXiv:1908.10084. <br> [44] Sutskever, I., Vinyals, O., & Le, Q. V. (2014). Sequence to sequence learning with neural networks. Advances in neural information processing systems, 27. <br> [45] Dai, Z., Yang, Z., Yang, Y., Carbonell, J., Le, Q. V., & Salakhutdinov, R. (2019). Transformer-xl: Attentive language models beyond a fixed-length context. arXiv preprint arXiv:1901.02860. <br> [46] Ronneberger, O., Fischer, P., & Brox, T. (2015). 
U-net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18 (pp. 234-241). Springer International Publishing. <br> [47] Howard, J., & Ruder, S. (2018). Universal language model fine-tuning for text classification. arXiv preprint arXiv:1801.06146. <br> [48] Simonyan, K., & Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. <br> [49] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R. R., & Le, Q. V. (2019). Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems, 32. <br> [50] Redmon, J., Divvala, S., Girshick, R., & Farhadi, A. (2016). You only look once: Unified, real-time object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 779-788). <br> </details>
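As a rough illustration of the Multiple-Choice-Question-Answering setup described in the card, the sketch below loads the dataset, prints its actual column names, and assembles a four-option prompt. The option column names used in `build_mcqa_prompt` are hypothetical placeholders (the card only names the "_label_" and "_Answer_" columns); the evaluation scripts in the linked DE-COP GitHub repository remain the authoritative reference.

```python
from datasets import load_dataset

dataset = load_dataset("avduarte333/arXivTection")
split = dataset[list(dataset.keys())[0]]   # first available split
print(split.column_names)                  # check the real column names first

# Hypothetical option columns for illustration only -- replace with the real
# names printed above. The card only guarantees the "label" and "Answer" columns.
OPTION_COLS = ("Example_A", "Example_B", "Example_C", "Example_D")

def build_mcqa_prompt(row, option_cols=OPTION_COLS):
    """Assemble a four-choice prompt: one real excerpt plus three paraphrases."""
    lines = ["Which of the following passages is the verbatim excerpt from the paper?"]
    for letter, col in zip("ABCD", option_cols):
        lines.append(f"{letter}) {row[col]}")
    lines.append("Answer with A, B, C, or D.")
    return "\n".join(lines)

row = split[0]
# print(build_mcqa_prompt(row))  # uncomment once the option columns are confirmed
```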
avduarte333/arXivTection
[ "task_categories:question-answering", "language:en", "license:mit", "arxiv:2402.09910", "region:us" ]
2024-01-30T10:57:09+00:00
{"language": ["en"], "license": "mit", "task_categories": ["question-answering"]}
2024-02-16T09:23:04+00:00
[ "2402.09910" ]
[ "en" ]
TAGS #task_categories-question-answering #language-English #license-mit #arxiv-2402.09910 #region-us
# arXivTection Dataset The arXivTection dataset serves as a benchmark designed for the task of detecting pretraining data from Large Language models. The dataset consists of 50 research papers extracted from arXiv. - 25 published in 2023: Non-Training data, "_label_" column = 0. - 25 published before 2022: Training data, "_label_" column = 1. From each paper ≈ 30 passages are extracted. Each passage is paraphrased 3 times using the Language Model Claude v2.0. <br> The "_Answer_" column indicates which of the passages is the real excerpt.<br> Passages are extracted to be on average ≈ 128 tokens in length. <br> # Testing Models on arXivTection Our dataset is planned to be used on a Multiple-Choice-Question-Answering format. Nonetheless, it is compatible to be used with other pretraining data detection methods.<br> Our GitHub repository contains example scripts to evaluate models on our dataset. <br> # Compatibility The Multiple-Choice-Question-Answering task with our Dataset is designed to be applied to various models, such as:<br> - LLaMA-2 - Mistral - Mixtral - Chat-GPT (gpt-3.5-turbo-instruct) - GPT-3 (text-davinci-003) - Claude <br> # Loading the Dataset <br> # Citation <details> <summary> Research Papers Used</summary> [1] Attanasio, G., Plaza-del-Arco, F. M., Nozza, D., & Lauscher, A. (2023). A Tale of Pronouns: Interpretability Informs Gender Bias Mitigation for Fairer Instruction-Tuned Machine Translation. arXiv preprint arXiv:2310.12127. <br> [2] Shi, Y., Wu, L., & Shao, M. (2023). Adaptive End-to-End Metric Learning for Zero-Shot Cross-Domain Slot Filling. arXiv preprint arXiv:2310.15294. <br> [3] Keleg, A., Goldwater, S., & Magdy, W. (2023). ALDi: Quantifying the arabic level of dialectness of text. arXiv preprint arXiv:2310.13747. <br> [4] Su, Y., Ji, Y., Li, J., Ye, H., & Zhang, M. (2023, December). Beware of Model Collapse! Fast and Stable Test-time Adaptation for Robust Question Answering. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (pp. 12998-13011). <br> [5] Chang, Y., Lo, K., Goyal, T., & Iyyer, M. (2023). BooookScore: A systematic exploration of book-length summarization in the era of LLMs. arXiv preprint arXiv:2310.00785. <br> [6] Karamolegkou, A., Li, J., Zhou, L., & Søgaard, A. (2023). Copyright Violations and Large Language Models. arXiv preprint arXiv:2310.13771. <br> [7] Weissweiler, L., Hofmann, V., Kantharuban, A., Cai, A., Dutt, R., Hengle, A., ... & Mortensen, D. R. (2023). Counting the Bugs in ChatGPT's Wugs: A Multilingual Investigation into the Morphological Capabilities of a Large Language Model. arXiv preprint arXiv:2310.15113. <br> [8] Li, Z., & Zhang, Y. (2023, December). Cultural Concept Adaptation on Multimodal Reasoning. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (pp. 262-276). <br> [9] Jiang, W., Mao, Q., Li, J., Lin, C., Yang, W., Deng, T., & Wang, Z. (2023). DisCo: Distilled Student Models Co-training for Semi-supervised Text Mining. arXiv preprint arXiv:2305.12074. <br> [10] Zhu, Y., Si, J., Zhao, Y., Zhu, H., Zhou, D., & He, Y. (2023). EXPLAIN, EDIT, GENERATE: Rationale-Sensitive Counterfactual Data Augmentation for Multi-hop Fact Verification. arXiv preprint arXiv:2310.14508. <br> [11] Hada, R., Seth, A., Diddee, H., & Bali, K. (2023). ''Fifty Shades of Bias'': Normative Ratings of Gender Bias in GPT Generated English Text. arXiv preprint arXiv:2310.17428. <br> [12] Song, Y., & Dhariwal, P. (2023). Improved techniques for training consistency models. 
arXiv preprint arXiv:2310.14189. <br> [13] Xu, W., Wang, D., Pan, L., Song, Z., Freitag, M., Wang, W. Y., & Li, L. (2023). Instructscore: Towards explainable text generation evaluation with automatic feedback. arXiv preprint arXiv:2305.14282. <br> [14] Majumder, B. P., He, Z., & McAuley, J. (2022). InterFair: Debiasing with Natural Language Feedback for Fair Interpretable Predictions. arXiv preprint arXiv:2210.07440. <br> [15] Yang, Z., Feng, R., Zhang, H., Shen, Y., Zhu, K., Huang, L., ... & Cheng, F. (2023). Eliminating Lipschitz Singularities in Diffusion Models. arXiv preprint arXiv:2306.11251. <br> [16] Li, J., Zhang, M., Guo, P., Zhang, M., & Zhang, Y. (2023). LLM-enhanced Self-training for Cross-domain Constituency Parsing. arXiv preprint arXiv:2311.02660. <br> [17] Ge, S., Zhang, Y., Liu, L., Zhang, M., Han, J., & Gao, J. (2023). Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs. arXiv preprint arXiv:2310.01801. <br> [18] Eustratiadis, P., Dudziak, Ł., Li, D., & Hospedales, T. (2023). Neural Fine-Tuning Search for Few-Shot Learning. arXiv preprint arXiv:2306.09295. <br> [19] Zhang, Y., Zhang, Y., Cui, L., & Fu, G. (2023). Non-autoregressive text editing with copy-aware latent alignments. arXiv preprint arXiv:2310.07821. <br> [20] Tu, H., Li, Y., Mi, F., & Yang, Z. (2023). ReSee: Responding through Seeing Fine-grained Visual Knowledge in Open-domain Dialogue. arXiv preprint arXiv:2305.13602. <br> [21] Deng, Y., Zhang, W., Pan, S. J., & Bing, L. (2023). SOUL: Towards Sentiment and Opinion Understanding of Language. arXiv preprint arXiv:2310.17924. <br> [22] Singh, G., Ghosh, S., Verma, A., Painkra, C., & Ekbal, A. (2023, December). Standardizing Distress Analysis: Emotion-Driven Distress Identification and Cause Extraction (DICE) in Multimodal Online Posts. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (pp. 4517-4532). <br> [23] Cao, Q., Kojima, T., Matsuo, Y., & Iwasawa, Y. (2023, December). Unnatural Error Correction: GPT-4 Can Almost Perfectly Handle Unnatural Scrambled Text. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (pp. 8898-8913). <br> [24] Cao, H., Yuan, L., Zhang, Y., & Ng, H. T. (2023, December). Unsupervised Grammatical Error Correction Rivaling Supervised Methods. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (pp. 3072-3088). <br> [25] Xu, S., Staufer, L., Ichim, O., Heri, C., & Grabmair, M. (2023). Vechr: A dataset for explainable and robust classification of vulnerability type in the european court of human rights. arXiv preprint arXiv:2310.11368. <br> [26] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., ... & Polosukhin, I. (2017). Attention is all you need. Advances in neural information processing systems, 30. <br> [27] Joulin, A., Grave, E., Bojanowski, P., & Mikolov, T. (2016). Bag of tricks for efficient text classification. arXiv preprint arXiv:1607.01759. <br> [28] Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. (2018). Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. <br> [29] Sarzynska-Wawer, J., Wawer, A., Pawlak, A., Szymanowska, J., Stefaniak, I., Jarkiewicz, M., & Okruszek, L. (2021). Detecting formal thought disorder by deep contextualized word representations. Psychiatry Research, 304, 114135. <br> [30] He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. 
In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770-778). <br> [31] Sanh, V., Debut, L., Chaumond, J., & Wolf, T. (2019). DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108. <br> [32] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., ... & Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1), 5485-5551. <br> [33] Ren, S., He, K., Girshick, R., & Sun, J. (2015). Faster r-cnn: Towards real-time object detection with region proposal networks. Advances in neural information processing systems, 28. <br> [34] Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., ... & Bengio, Y. (2014). Generative adversarial nets. Advances in neural information processing systems, 27. <br> [35] Pennington, J., Socher, R., & Manning, C. D. (2014, October). Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP) (pp. 1532-1543). <br> [36] Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., ... & Rabinovich, A. (2015). Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1-9). <br> [37] Deng, J., Dong, W., Socher, R., Li, L. J., Li, K., & Fei-Fei, L. (2009, June). Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition (pp. 248-255). Ieee. <br> [38] Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems, 25. <br> [39] Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., ... & Amodei, D. (2020). Language models are few-shot learners. Advances in neural information processing systems, 33, 1877-1901. <br> [40] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. (2019). Language models are unsupervised multitask learners. OpenAI blog, 1(8), 9. <br> [41] Bahdanau, D., Cho, K., & Bengio, Y. (2014). Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. <br> [42] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., ... & Stoyanov, V. (2019). Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. <br> [43] Reimers, N., & Gurevych, I. (2019). Sentence-bert: Sentence embeddings using siamese bert-networks. arXiv preprint arXiv:1908.10084. <br> [44] Sutskever, I., Vinyals, O., & Le, Q. V. (2014). Sequence to sequence learning with neural networks. Advances in neural information processing systems, 27. <br> [45] Dai, Z., Yang, Z., Yang, Y., Carbonell, J., Le, Q. V., & Salakhutdinov, R. (2019). Transformer-xl: Attentive language models beyond a fixed-length context. arXiv preprint arXiv:1901.02860. <br> [46] Ronneberger, O., Fischer, P., & Brox, T. (2015). U-net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18 (pp. 234-241). Springer International Publishing. <br> [47] Howard, J., & Ruder, S. (2018). Universal language model fine-tuning for text classification. arXiv preprint arXiv:1801.06146. <br> [48] Simonyan, K., & Zisserman, A. (2014). 
Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. <br> [49] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R. R., & Le, Q. V. (2019). Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems, 32. <br> [50] Redmon, J., Divvala, S., Girshick, R., & Farhadi, A. (2016). You only look once: Unified, real-time object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 779-788). <br> </details>
[ "# arXivTection Dataset\nThe arXivTection dataset serves as a benchmark designed for the task of detecting pretraining data from Large Language models.\n\nThe dataset consists of 50 research papers extracted from arXiv. \n- 25 published in 2023: Non-Training data, \"_label_\" column = 0.\n- 25 published before 2022: Training data, \"_label_\" column = 1.\n\nFrom each paper ≈ 30 passages are extracted. Each passage is paraphrased 3 times using the Language Model Claude v2.0. <br>\nThe \"_Answer_\" column indicates which of the passages is the real excerpt.<br>\nPassages are extracted to be on average ≈ 128 tokens in length.\n\n<br>", "# Testing Models on arXivTection\nOur dataset is planned to be used on a Multiple-Choice-Question-Answering format. Nonetheless, it is compatible to be used with other pretraining data detection methods.<br>\nOur GitHub repository contains example scripts to evaluate models on our dataset.\n\n<br>", "# Compatibility\nThe Multiple-Choice-Question-Answering task with our Dataset is designed to be applied to various models, such as:<br>\n- LLaMA-2\n- Mistral\n- Mixtral\n- Chat-GPT (gpt-3.5-turbo-instruct)\n- GPT-3 (text-davinci-003)\n- Claude \n\n<br>", "# Loading the Dataset\n\n\n\n<br>", "# Citation\n\n\n\n<details>\n <summary> Research Papers Used</summary>\n [1] Attanasio, G., Plaza-del-Arco, F. M., Nozza, D., & Lauscher, A. (2023). A Tale of Pronouns: Interpretability Informs Gender Bias Mitigation for Fairer Instruction-Tuned Machine Translation. arXiv preprint arXiv:2310.12127. <br>\n [2] Shi, Y., Wu, L., & Shao, M. (2023). Adaptive End-to-End Metric Learning for Zero-Shot Cross-Domain Slot Filling. arXiv preprint arXiv:2310.15294. <br>\n [3] Keleg, A., Goldwater, S., & Magdy, W. (2023). ALDi: Quantifying the arabic level of dialectness of text. arXiv preprint arXiv:2310.13747. <br>\n [4] Su, Y., Ji, Y., Li, J., Ye, H., & Zhang, M. (2023, December). Beware of Model Collapse! Fast and Stable Test-time Adaptation for Robust Question Answering. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (pp. 12998-13011). <br>\n [5] Chang, Y., Lo, K., Goyal, T., & Iyyer, M. (2023). BooookScore: A systematic exploration of book-length summarization in the era of LLMs. arXiv preprint arXiv:2310.00785. <br>\n [6] Karamolegkou, A., Li, J., Zhou, L., & Søgaard, A. (2023). Copyright Violations and Large Language Models. arXiv preprint arXiv:2310.13771. <br>\n [7] Weissweiler, L., Hofmann, V., Kantharuban, A., Cai, A., Dutt, R., Hengle, A., ... & Mortensen, D. R. (2023). Counting the Bugs in ChatGPT's Wugs: A Multilingual Investigation into the Morphological Capabilities of a Large Language Model. arXiv preprint arXiv:2310.15113. <br>\n [8] Li, Z., & Zhang, Y. (2023, December). Cultural Concept Adaptation on Multimodal Reasoning. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (pp. 262-276). <br>\n [9] Jiang, W., Mao, Q., Li, J., Lin, C., Yang, W., Deng, T., & Wang, Z. (2023). DisCo: Distilled Student Models Co-training for Semi-supervised Text Mining. arXiv preprint arXiv:2305.12074. <br>\n [10] Zhu, Y., Si, J., Zhao, Y., Zhu, H., Zhou, D., & He, Y. (2023). EXPLAIN, EDIT, GENERATE: Rationale-Sensitive Counterfactual Data Augmentation for Multi-hop Fact Verification. arXiv preprint arXiv:2310.14508. <br>\n [11] Hada, R., Seth, A., Diddee, H., & Bali, K. (2023). ''Fifty Shades of Bias'': Normative Ratings of Gender Bias in GPT Generated English Text. arXiv preprint arXiv:2310.17428. 
<br>\n [12] Song, Y., & Dhariwal, P. (2023). Improved techniques for training consistency models. arXiv preprint arXiv:2310.14189. <br>\n [13] Xu, W., Wang, D., Pan, L., Song, Z., Freitag, M., Wang, W. Y., & Li, L. (2023). Instructscore: Towards explainable text generation evaluation with automatic feedback. arXiv preprint arXiv:2305.14282. <br>\n [14] Majumder, B. P., He, Z., & McAuley, J. (2022). InterFair: Debiasing with Natural Language Feedback for Fair Interpretable Predictions. arXiv preprint arXiv:2210.07440. <br>\n [15] Yang, Z., Feng, R., Zhang, H., Shen, Y., Zhu, K., Huang, L., ... & Cheng, F. (2023). Eliminating Lipschitz Singularities in Diffusion Models. arXiv preprint arXiv:2306.11251. <br>\n [16] Li, J., Zhang, M., Guo, P., Zhang, M., & Zhang, Y. (2023). LLM-enhanced Self-training for Cross-domain Constituency Parsing. arXiv preprint arXiv:2311.02660. <br>\n [17] Ge, S., Zhang, Y., Liu, L., Zhang, M., Han, J., & Gao, J. (2023). Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs. arXiv preprint arXiv:2310.01801. <br>\n [18] Eustratiadis, P., Dudziak, Ł., Li, D., & Hospedales, T. (2023). Neural Fine-Tuning Search for Few-Shot Learning. arXiv preprint arXiv:2306.09295. <br>\n [19] Zhang, Y., Zhang, Y., Cui, L., & Fu, G. (2023). Non-autoregressive text editing with copy-aware latent alignments. arXiv preprint arXiv:2310.07821. <br>\n [20] Tu, H., Li, Y., Mi, F., & Yang, Z. (2023). ReSee: Responding through Seeing Fine-grained Visual Knowledge in Open-domain Dialogue. arXiv preprint arXiv:2305.13602. <br>\n [21] Deng, Y., Zhang, W., Pan, S. J., & Bing, L. (2023). SOUL: Towards Sentiment and Opinion Understanding of Language. arXiv preprint arXiv:2310.17924. <br>\n [22] Singh, G., Ghosh, S., Verma, A., Painkra, C., & Ekbal, A. (2023, December). Standardizing Distress Analysis: Emotion-Driven Distress Identification and Cause Extraction (DICE) in Multimodal Online Posts. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (pp. 4517-4532). <br>\n [23] Cao, Q., Kojima, T., Matsuo, Y., & Iwasawa, Y. (2023, December). Unnatural Error Correction: GPT-4 Can Almost Perfectly Handle Unnatural Scrambled Text. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (pp. 8898-8913). <br>\n [24] Cao, H., Yuan, L., Zhang, Y., & Ng, H. T. (2023, December). Unsupervised Grammatical Error Correction Rivaling Supervised Methods. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (pp. 3072-3088). <br>\n [25] Xu, S., Staufer, L., Ichim, O., Heri, C., & Grabmair, M. (2023). Vechr: A dataset for explainable and robust classification of vulnerability type in the european court of human rights. arXiv preprint arXiv:2310.11368. <br>\n [26] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., ... & Polosukhin, I. (2017). Attention is all you need. Advances in neural information processing systems, 30. <br>\n [27] Joulin, A., Grave, E., Bojanowski, P., & Mikolov, T. (2016). Bag of tricks for efficient text classification. arXiv preprint arXiv:1607.01759. <br>\n [28] Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. (2018). Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. <br>\n [29] Sarzynska-Wawer, J., Wawer, A., Pawlak, A., Szymanowska, J., Stefaniak, I., Jarkiewicz, M., & Okruszek, L. (2021). Detecting formal thought disorder by deep contextualized word representations. 
Psychiatry Research, 304, 114135. <br>\n [30] He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770-778). <br>\n [31] Sanh, V., Debut, L., Chaumond, J., & Wolf, T. (2019). DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108. <br>\n [32] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., ... & Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1), 5485-5551. <br>\n [33] Ren, S., He, K., Girshick, R., & Sun, J. (2015). Faster r-cnn: Towards real-time object detection with region proposal networks. Advances in neural information processing systems, 28. <br>\n [34] Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., ... & Bengio, Y. (2014). Generative adversarial nets. Advances in neural information processing systems, 27. <br>\n [35] Pennington, J., Socher, R., & Manning, C. D. (2014, October). Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP) (pp. 1532-1543). <br>\n [36] Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., ... & Rabinovich, A. (2015). Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1-9). <br>\n [37] Deng, J., Dong, W., Socher, R., Li, L. J., Li, K., & Fei-Fei, L. (2009, June). Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition (pp. 248-255). Ieee. <br>\n [38] Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems, 25. <br>\n [39] Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., ... & Amodei, D. (2020). Language models are few-shot learners. Advances in neural information processing systems, 33, 1877-1901. <br>\n [40] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. (2019). Language models are unsupervised multitask learners. OpenAI blog, 1(8), 9. <br>\n [41] Bahdanau, D., Cho, K., & Bengio, Y. (2014). Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. <br>\n [42] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., ... & Stoyanov, V. (2019). Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. <br>\n [43] Reimers, N., & Gurevych, I. (2019). Sentence-bert: Sentence embeddings using siamese bert-networks. arXiv preprint arXiv:1908.10084. <br>\n [44] Sutskever, I., Vinyals, O., & Le, Q. V. (2014). Sequence to sequence learning with neural networks. Advances in neural information processing systems, 27. <br>\n [45] Dai, Z., Yang, Z., Yang, Y., Carbonell, J., Le, Q. V., & Salakhutdinov, R. (2019). Transformer-xl: Attentive language models beyond a fixed-length context. arXiv preprint arXiv:1901.02860. <br>\n [46] Ronneberger, O., Fischer, P., & Brox, T. (2015). U-net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18 (pp. 234-241). Springer International Publishing. 
<br>\n [47] Howard, J., & Ruder, S. (2018). Universal language model fine-tuning for text classification. arXiv preprint arXiv:1801.06146. <br>\n [48] Simonyan, K., & Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. <br>\n [49] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R. R., & Le, Q. V. (2019). Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems, 32. <br>\n [50] Redmon, J., Divvala, S., Girshick, R., & Farhadi, A. (2016). You only look once: Unified, real-time object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 779-788). <br>\n</details>" ]
[ "TAGS\n#task_categories-question-answering #language-English #license-mit #arxiv-2402.09910 #region-us \n", "# arXivTection Dataset\nThe arXivTection dataset serves as a benchmark designed for the task of detecting pretraining data from Large Language models.\n\nThe dataset consists of 50 research papers extracted from arXiv. \n- 25 published in 2023: Non-Training data, \"_label_\" column = 0.\n- 25 published before 2022: Training data, \"_label_\" column = 1.\n\nFrom each paper ≈ 30 passages are extracted. Each passage is paraphrased 3 times using the Language Model Claude v2.0. <br>\nThe \"_Answer_\" column indicates which of the passages is the real excerpt.<br>\nPassages are extracted to be on average ≈ 128 tokens in length.\n\n<br>", "# Testing Models on arXivTection\nOur dataset is planned to be used on a Multiple-Choice-Question-Answering format. Nonetheless, it is compatible to be used with other pretraining data detection methods.<br>\nOur GitHub repository contains example scripts to evaluate models on our dataset.\n\n<br>", "# Compatibility\nThe Multiple-Choice-Question-Answering task with our Dataset is designed to be applied to various models, such as:<br>\n- LLaMA-2\n- Mistral\n- Mixtral\n- Chat-GPT (gpt-3.5-turbo-instruct)\n- GPT-3 (text-davinci-003)\n- Claude \n\n<br>", "# Loading the Dataset\n\n\n\n<br>", "# Citation\n\n\n\n<details>\n <summary> Research Papers Used</summary>\n [1] Attanasio, G., Plaza-del-Arco, F. M., Nozza, D., & Lauscher, A. (2023). A Tale of Pronouns: Interpretability Informs Gender Bias Mitigation for Fairer Instruction-Tuned Machine Translation. arXiv preprint arXiv:2310.12127. <br>\n [2] Shi, Y., Wu, L., & Shao, M. (2023). Adaptive End-to-End Metric Learning for Zero-Shot Cross-Domain Slot Filling. arXiv preprint arXiv:2310.15294. <br>\n [3] Keleg, A., Goldwater, S., & Magdy, W. (2023). ALDi: Quantifying the arabic level of dialectness of text. arXiv preprint arXiv:2310.13747. <br>\n [4] Su, Y., Ji, Y., Li, J., Ye, H., & Zhang, M. (2023, December). Beware of Model Collapse! Fast and Stable Test-time Adaptation for Robust Question Answering. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (pp. 12998-13011). <br>\n [5] Chang, Y., Lo, K., Goyal, T., & Iyyer, M. (2023). BooookScore: A systematic exploration of book-length summarization in the era of LLMs. arXiv preprint arXiv:2310.00785. <br>\n [6] Karamolegkou, A., Li, J., Zhou, L., & Søgaard, A. (2023). Copyright Violations and Large Language Models. arXiv preprint arXiv:2310.13771. <br>\n [7] Weissweiler, L., Hofmann, V., Kantharuban, A., Cai, A., Dutt, R., Hengle, A., ... & Mortensen, D. R. (2023). Counting the Bugs in ChatGPT's Wugs: A Multilingual Investigation into the Morphological Capabilities of a Large Language Model. arXiv preprint arXiv:2310.15113. <br>\n [8] Li, Z., & Zhang, Y. (2023, December). Cultural Concept Adaptation on Multimodal Reasoning. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (pp. 262-276). <br>\n [9] Jiang, W., Mao, Q., Li, J., Lin, C., Yang, W., Deng, T., & Wang, Z. (2023). DisCo: Distilled Student Models Co-training for Semi-supervised Text Mining. arXiv preprint arXiv:2305.12074. <br>\n [10] Zhu, Y., Si, J., Zhao, Y., Zhu, H., Zhou, D., & He, Y. (2023). EXPLAIN, EDIT, GENERATE: Rationale-Sensitive Counterfactual Data Augmentation for Multi-hop Fact Verification. arXiv preprint arXiv:2310.14508. <br>\n [11] Hada, R., Seth, A., Diddee, H., & Bali, K. (2023). 
''Fifty Shades of Bias'': Normative Ratings of Gender Bias in GPT Generated English Text. arXiv preprint arXiv:2310.17428. <br>\n [12] Song, Y., & Dhariwal, P. (2023). Improved techniques for training consistency models. arXiv preprint arXiv:2310.14189. <br>\n [13] Xu, W., Wang, D., Pan, L., Song, Z., Freitag, M., Wang, W. Y., & Li, L. (2023). Instructscore: Towards explainable text generation evaluation with automatic feedback. arXiv preprint arXiv:2305.14282. <br>\n [14] Majumder, B. P., He, Z., & McAuley, J. (2022). InterFair: Debiasing with Natural Language Feedback for Fair Interpretable Predictions. arXiv preprint arXiv:2210.07440. <br>\n [15] Yang, Z., Feng, R., Zhang, H., Shen, Y., Zhu, K., Huang, L., ... & Cheng, F. (2023). Eliminating Lipschitz Singularities in Diffusion Models. arXiv preprint arXiv:2306.11251. <br>\n [16] Li, J., Zhang, M., Guo, P., Zhang, M., & Zhang, Y. (2023). LLM-enhanced Self-training for Cross-domain Constituency Parsing. arXiv preprint arXiv:2311.02660. <br>\n [17] Ge, S., Zhang, Y., Liu, L., Zhang, M., Han, J., & Gao, J. (2023). Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs. arXiv preprint arXiv:2310.01801. <br>\n [18] Eustratiadis, P., Dudziak, Ł., Li, D., & Hospedales, T. (2023). Neural Fine-Tuning Search for Few-Shot Learning. arXiv preprint arXiv:2306.09295. <br>\n [19] Zhang, Y., Zhang, Y., Cui, L., & Fu, G. (2023). Non-autoregressive text editing with copy-aware latent alignments. arXiv preprint arXiv:2310.07821. <br>\n [20] Tu, H., Li, Y., Mi, F., & Yang, Z. (2023). ReSee: Responding through Seeing Fine-grained Visual Knowledge in Open-domain Dialogue. arXiv preprint arXiv:2305.13602. <br>\n [21] Deng, Y., Zhang, W., Pan, S. J., & Bing, L. (2023). SOUL: Towards Sentiment and Opinion Understanding of Language. arXiv preprint arXiv:2310.17924. <br>\n [22] Singh, G., Ghosh, S., Verma, A., Painkra, C., & Ekbal, A. (2023, December). Standardizing Distress Analysis: Emotion-Driven Distress Identification and Cause Extraction (DICE) in Multimodal Online Posts. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (pp. 4517-4532). <br>\n [23] Cao, Q., Kojima, T., Matsuo, Y., & Iwasawa, Y. (2023, December). Unnatural Error Correction: GPT-4 Can Almost Perfectly Handle Unnatural Scrambled Text. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (pp. 8898-8913). <br>\n [24] Cao, H., Yuan, L., Zhang, Y., & Ng, H. T. (2023, December). Unsupervised Grammatical Error Correction Rivaling Supervised Methods. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (pp. 3072-3088). <br>\n [25] Xu, S., Staufer, L., Ichim, O., Heri, C., & Grabmair, M. (2023). Vechr: A dataset for explainable and robust classification of vulnerability type in the european court of human rights. arXiv preprint arXiv:2310.11368. <br>\n [26] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., ... & Polosukhin, I. (2017). Attention is all you need. Advances in neural information processing systems, 30. <br>\n [27] Joulin, A., Grave, E., Bojanowski, P., & Mikolov, T. (2016). Bag of tricks for efficient text classification. arXiv preprint arXiv:1607.01759. <br>\n [28] Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. (2018). Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. 
<br>\n [29] Sarzynska-Wawer, J., Wawer, A., Pawlak, A., Szymanowska, J., Stefaniak, I., Jarkiewicz, M., & Okruszek, L. (2021). Detecting formal thought disorder by deep contextualized word representations. Psychiatry Research, 304, 114135. <br>\n [30] He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770-778). <br>\n [31] Sanh, V., Debut, L., Chaumond, J., & Wolf, T. (2019). DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108. <br>\n [32] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., ... & Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1), 5485-5551. <br>\n [33] Ren, S., He, K., Girshick, R., & Sun, J. (2015). Faster r-cnn: Towards real-time object detection with region proposal networks. Advances in neural information processing systems, 28. <br>\n [34] Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., ... & Bengio, Y. (2014). Generative adversarial nets. Advances in neural information processing systems, 27. <br>\n [35] Pennington, J., Socher, R., & Manning, C. D. (2014, October). Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP) (pp. 1532-1543). <br>\n [36] Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., ... & Rabinovich, A. (2015). Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1-9). <br>\n [37] Deng, J., Dong, W., Socher, R., Li, L. J., Li, K., & Fei-Fei, L. (2009, June). Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition (pp. 248-255). Ieee. <br>\n [38] Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems, 25. <br>\n [39] Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., ... & Amodei, D. (2020). Language models are few-shot learners. Advances in neural information processing systems, 33, 1877-1901. <br>\n [40] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. (2019). Language models are unsupervised multitask learners. OpenAI blog, 1(8), 9. <br>\n [41] Bahdanau, D., Cho, K., & Bengio, Y. (2014). Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. <br>\n [42] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., ... & Stoyanov, V. (2019). Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. <br>\n [43] Reimers, N., & Gurevych, I. (2019). Sentence-bert: Sentence embeddings using siamese bert-networks. arXiv preprint arXiv:1908.10084. <br>\n [44] Sutskever, I., Vinyals, O., & Le, Q. V. (2014). Sequence to sequence learning with neural networks. Advances in neural information processing systems, 27. <br>\n [45] Dai, Z., Yang, Z., Yang, Y., Carbonell, J., Le, Q. V., & Salakhutdinov, R. (2019). Transformer-xl: Attentive language models beyond a fixed-length context. arXiv preprint arXiv:1901.02860. <br>\n [46] Ronneberger, O., Fischer, P., & Brox, T. (2015). U-net: Convolutional networks for biomedical image segmentation. 
In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18 (pp. 234-241). Springer International Publishing. <br>\n [47] Howard, J., & Ruder, S. (2018). Universal language model fine-tuning for text classification. arXiv preprint arXiv:1801.06146. <br>\n [48] Simonyan, K., & Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. <br>\n [49] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R. R., & Le, Q. V. (2019). Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems, 32. <br>\n [50] Redmon, J., Divvala, S., Girshick, R., & Farhadi, A. (2016). You only look once: Unified, real-time object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 779-788). <br>\n</details>" ]
bfa0bd2a484b6505edaaa9dd620fa2f2cfb05df0
# Dataset Card for "full_cleaned_xsum_faith" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mtc/full_cleaned_xsum_faith
[ "region:us" ]
2024-01-30T11:05:32+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}, {"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "document", "dtype": "string"}, {"name": "claim", "dtype": "string"}, {"name": "bbcid", "dtype": "string"}, {"name": "model_name", "dtype": "string"}, {"name": "label", "dtype": "string"}, {"name": "split", "dtype": "string"}, {"name": "annotations", "sequence": "string"}], "splits": [{"name": "test", "num_bytes": 3097533.036, "num_examples": 1247}, {"name": "train", "num_bytes": 2639459.862857143, "num_examples": 1048}, {"name": "validation", "num_bytes": 451054.0, "num_examples": 200}], "download_size": 2120822, "dataset_size": 6188046.898857143}}
2024-01-31T13:09:55+00:00
[]
[]
TAGS #region-us
# Dataset Card for "full_cleaned_xsum_faith" More Information needed
[ "# Dataset Card for \"full_cleaned_xsum_faith\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"full_cleaned_xsum_faith\"\n\nMore Information needed" ]
bb2f6bf341557604d94434b6ede40063ba92d900
# Dataset Card for Dataset Name <!-- Provide a quick summary of the dataset. --> This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. 
--> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
DjSteker/Electronica
[ "task_categories:question-answering", "language:es", "region:us" ]
2024-01-30T11:08:54+00:00
{"language": ["es"], "task_categories": ["question-answering"], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6413, "num_examples": 13}], "download_size": 10575, "dataset_size": 6413}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-02-09T00:31:35+00:00
[]
[ "es" ]
TAGS #task_categories-question-answering #language-Spanish #region-us
# Dataset Card for Dataset Name This dataset card aims to be a base template for new datasets. It has been generated using this raw template. ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#task_categories-question-answering #language-Spanish #region-us \n", "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
164ad1992684d41caed920f5b10b94defea03c48
Hello World
Aur0ra/test
[ "task_categories:text-classification", "size_categories:n<1K", "region:us" ]
2024-01-30T11:58:53+00:00
{"size_categories": ["n<1K"], "task_categories": ["text-classification"]}
2024-01-31T01:43:54+00:00
[]
[]
TAGS #task_categories-text-classification #size_categories-n<1K #region-us
Hello World
[]
[ "TAGS\n#task_categories-text-classification #size_categories-n<1K #region-us \n" ]
b6ee382f5c4f1265110c83d2219b4852dd4af4b5
The **UTRSet-Real** dataset is a comprehensive, manually annotated dataset specifically curated for **Printed Urdu OCR** research. It contains over **11,000** printed text line images, each of which has been meticulously annotated. One of the standout features of this dataset is its remarkable diversity, which includes variations in fonts, text sizes, colours, orientations, lighting conditions, noises, styles, and backgrounds. This diversity closely mirrors real-world scenarios, making the dataset highly suitable for training and evaluating models that aim to excel in real-world Urdu text recognition tasks. The availability of the UTRSet-Real dataset addresses the scarcity of comprehensive real-world printed Urdu OCR datasets. By providing researchers with a valuable resource for developing and benchmarking Urdu OCR models, this dataset promotes standardized evaluation and reproducibility and fosters advancements in the field of Urdu OCR. Further, to complement the UTRSet-Real for training purposes, we also present [**UTRSet-Synth**](https://paperswithcode.com/dataset/utrset-synth), a high-quality synthetic dataset closely resembling real-world representations of Urdu text. For more information and details about the [UTRSet-Real](https://paperswithcode.com/dataset/utrset-real) & [UTRSet-Synth](https://paperswithcode.com/dataset/utrset-synth) datasets, please refer to the paper ["UTRNet: High-Resolution Urdu Text Recognition In Printed Documents"](https://arxiv.org/abs/2306.15782)
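As a quick orientation, here is a minimal loading sketch. It assumes the Hub repository id listed below (abdur75648/UTRSet-Real) can be read directly with the `datasets` library and that each record pairs a line image with its ground-truth transcription; the feature names (`image`, `text`) and the `train` split are illustrative assumptions, not confirmed by the card.

```python
# Minimal sketch: load UTRSet-Real line images for printed Urdu OCR experiments.
# Assumes the repo loads via `datasets.load_dataset`; the feature names
# ("image", "text") and the "train" split are illustrative, not confirmed.
from datasets import load_dataset

ds = load_dataset("abdur75648/UTRSet-Real", split="train")

sample = ds[0]
line_image = sample["image"]    # PIL image of one printed Urdu text line (assumed feature name)
transcription = sample["text"]  # ground-truth Urdu string (assumed feature name)
print(line_image.size, transcription)
```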
abdur75648/UTRSet-Real
[ "task_categories:image-to-text", "language:ur", "license:cc-by-nc-4.0", "ocr", "text recognition", "urdu-ocr", "utrnet", "arxiv:2306.15782", "region:us" ]
2024-01-30T13:32:35+00:00
{"language": ["ur"], "license": "cc-by-nc-4.0", "task_categories": ["image-to-text"], "pretty_name": "UTRSet-Real", "title": "UrduSet-Real (UTRNet)", "emoji": "\ud83d\udcd6", "colorFrom": "red", "colorTo": "green", "tags": ["ocr", "text recognition", "urdu-ocr", "utrnet"], "references": ["https://github.com/abdur75648/UTRNet-High-Resolution-Urdu-Text-Recognition", "https://abdur75648.github.io/UTRNet/", "https://arxiv.org/abs/2306.15782"]}
2024-01-30T13:38:29+00:00
[ "2306.15782" ]
[ "ur" ]
TAGS #task_categories-image-to-text #language-Urdu #license-cc-by-nc-4.0 #ocr #text recognition #urdu-ocr #utrnet #arxiv-2306.15782 #region-us
The UTRSet-Real dataset is a comprehensive, manually annotated dataset specifically curated for Printed Urdu OCR research. It contains over 11,000 printed text line images, each of which has been meticulously annotated. One of the standout features of this dataset is its remarkable diversity, which includes variations in fonts, text sizes, colours, orientations, lighting conditions, noises, styles, and backgrounds. This diversity closely mirrors real-world scenarios, making the dataset highly suitable for training and evaluating models that aim to excel in real-world Urdu text recognition tasks. The availability of the UTRSet-Real dataset addresses the scarcity of comprehensive real-world printed Urdu OCR datasets. By providing researchers with a valuable resource for developing and benchmarking Urdu OCR models, this dataset promotes standardized evaluation and reproducibility and fosters advancements in the field of Urdu OCR. Further, to complement the UTRSet-Real for training purposes, we also present UTRSet-Synth, a high-quality synthetic dataset closely resembling real-world representations of Urdu text. For more information and details about the UTRSet-Real & UTRSet-Synth datasets, please refer to the paper "UTRNet: High-Resolution Urdu Text Recognition In Printed Documents"
[]
[ "TAGS\n#task_categories-image-to-text #language-Urdu #license-cc-by-nc-4.0 #ocr #text recognition #urdu-ocr #utrnet #arxiv-2306.15782 #region-us \n" ]
adaf9ba0d2d32c6f34931d149346660e738b006c
The **UTRSet-Synth** dataset is introduced as a complementary training resource to the [**UTRSet-Real** Dataset](https://paperswithcode.com/dataset/utrset-real), specifically designed to enhance the effectiveness of Urdu OCR models. It is a high-quality synthetic dataset comprising 20,000 lines that closely resemble real-world representations of Urdu text. To generate the dataset, a custom-designed synthetic data generation module which offers precise control over variations in crucial factors such as font, text size, colour, resolution, orientation, noise, style, and background, was employed. Moreover, the UTRSet-Synth dataset tackles the limitations observed in existing datasets. It addresses the challenge of standardizing fonts by incorporating over 130 diverse Urdu fonts, which were thoroughly refined to ensure consistent rendering schemes. It overcomes the scarcity of Arabic words, numerals, and Urdu digits by incorporating a significant number of samples representing these elements. Additionally, the dataset is enriched by randomly selecting words from a vocabulary of 100,000 words during the text generation process. As a result, UTRSet-Synth contains a total of 28,187 unique words, with an average word length of 7 characters. The availability of the UTRSet-Synth dataset, a synthetic dataset that closely emulates real-world variations, addresses the scarcity of comprehensive real-world printed Urdu OCR datasets. By providing researchers with a valuable resource for developing and benchmarking Urdu OCR models, this dataset promotes standardized evaluation, and reproducibility, and fosters advancements in the field of Urdu OCR. For more information and details about the [UTRSet-Real](https://paperswithcode.com/dataset/utrset-real) & [UTRSet-Synth](https://paperswithcode.com/dataset/utrset-synth) datasets, please refer to the paper ["UTRNet: High-Resolution Urdu Text Recognition In Printed Documents"](https://arxiv.org/abs/2306.15782)
abdur75648/UTRSet-Synth
[ "task_categories:image-to-text", "language:ur", "license:cc-by-nc-4.0", "ocr", "text recognition", "urdu-ocr", "utrnet", "arxiv:2306.15782", "region:us" ]
2024-01-30T13:40:57+00:00
{"language": ["ur"], "license": "cc-by-nc-4.0", "task_categories": ["image-to-text"], "pretty_name": "UTRSet-Synth", "title": "UrduSet-Synth (UTRNet)", "emoji": "\ud83d\udcd6", "colorFrom": "red", "colorTo": "green", "tags": ["ocr", "text recognition", "urdu-ocr", "utrnet"], "references": ["https://github.com/abdur75648/UTRNet-High-Resolution-Urdu-Text-Recognition", "https://abdur75648.github.io/UTRNet/", "https://arxiv.org/abs/2306.15782"]}
2024-01-30T13:42:13+00:00
[ "2306.15782" ]
[ "ur" ]
TAGS #task_categories-image-to-text #language-Urdu #license-cc-by-nc-4.0 #ocr #text recognition #urdu-ocr #utrnet #arxiv-2306.15782 #region-us
The UTRSet-Synth dataset is introduced as a complementary training resource to the UTRSet-Real Dataset, specifically designed to enhance the effectiveness of Urdu OCR models. It is a high-quality synthetic dataset comprising 20,000 lines that closely resemble real-world representations of Urdu text. To generate the dataset, a custom-designed synthetic data generation module which offers precise control over variations in crucial factors such as font, text size, colour, resolution, orientation, noise, style, and background, was employed. Moreover, the UTRSet-Synth dataset tackles the limitations observed in existing datasets. It addresses the challenge of standardizing fonts by incorporating over 130 diverse Urdu fonts, which were thoroughly refined to ensure consistent rendering schemes. It overcomes the scarcity of Arabic words, numerals, and Urdu digits by incorporating a significant number of samples representing these elements. Additionally, the dataset is enriched by randomly selecting words from a vocabulary of 100,000 words during the text generation process. As a result, UTRSet-Synth contains a total of 28,187 unique words, with an average word length of 7 characters. The availability of the UTRSet-Synth dataset, a synthetic dataset that closely emulates real-world variations, addresses the scarcity of comprehensive real-world printed Urdu OCR datasets. By providing researchers with a valuable resource for developing and benchmarking Urdu OCR models, this dataset promotes standardized evaluation, and reproducibility, and fosters advancements in the field of Urdu OCR. For more information and details about the UTRSet-Real & UTRSet-Synth datasets, please refer to the paper "UTRNet: High-Resolution Urdu Text Recognition In Printed Documents"
[]
[ "TAGS\n#task_categories-image-to-text #language-Urdu #license-cc-by-nc-4.0 #ocr #text recognition #urdu-ocr #utrnet #arxiv-2306.15782 #region-us \n" ]
6ed038a4f842a0239d35a78648bf9a805ef862a6
The **UrduDoc Dataset** is a benchmark dataset for Urdu text line detection in scanned documents. It is created as a byproduct of the **UTRSet-Real** dataset generation process. Comprising 478 diverse images collected from various sources such as books, documents, manuscripts, and newspapers, it offers a valuable resource for research in Urdu document analysis. It includes 358 pages for training and 120 pages for validation, featuring a wide range of styles, scales, and lighting conditions. It serves as a benchmark for evaluating printed Urdu text detection models, and the benchmark results of state-of-the-art models are provided. The Contour-Net model demonstrates the best performance in terms of h-mean. The UrduDoc dataset is the first of its kind for printed Urdu text line detection and will advance research in the field. It will be made publicly available for non-commercial, academic, and research purposes upon request and execution of a no-cost license agreement. To request the dataset and for more information and details about the [UrduDoc ](https://paperswithcode.com/dataset/urdudoc), [UTRSet-Real](https://paperswithcode.com/dataset/utrset-real) & [UTRSet-Synth](https://paperswithcode.com/dataset/utrset-synth) datasets, please refer to the [Project Website](https://abdur75648.github.io/UTRNet/) of our paper ["UTRNet: High-Resolution Urdu Text Recognition In Printed Documents"](https://arxiv.org/abs/2306.15782)
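Since the benchmark ranks detection models by h-mean, the short sketch below spells out that metric: the harmonic mean of detection precision and recall. The counts passed in the example call are illustrative only, not the benchmark's actual numbers.

```python
# h-mean = harmonic mean of precision and recall, as used to compare
# text-line detectors on UrduDoc. The counts below are illustrative.
def h_mean(true_positives: int, predicted: int, ground_truth: int) -> float:
    precision = true_positives / predicted if predicted else 0.0
    recall = true_positives / ground_truth if ground_truth else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(h_mean(true_positives=90, predicted=100, ground_truth=110))  # ≈ 0.857
```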
abdur75648/UrduDoc
[ "task_categories:image-to-text", "language:ur", "license:cc-by-nc-4.0", "ocr", "text recognition", "urdu-ocr", "utrnet", "arxiv:2306.15782", "region:us" ]
2024-01-30T13:42:57+00:00
{"language": ["ur"], "license": "cc-by-nc-4.0", "task_categories": ["image-to-text"], "pretty_name": "UrduDoc", "title": "UrduDoc (UTRNet)", "emoji": "\ud83d\udcd6", "colorFrom": "red", "colorTo": "green", "tags": ["ocr", "text recognition", "urdu-ocr", "utrnet"], "references": ["https://github.com/abdur75648/UTRNet-High-Resolution-Urdu-Text-Recognition", "https://abdur75648.github.io/UTRNet/", "https://arxiv.org/abs/2306.15782"]}
2024-01-30T13:43:44+00:00
[ "2306.15782" ]
[ "ur" ]
TAGS #task_categories-image-to-text #language-Urdu #license-cc-by-nc-4.0 #ocr #text recognition #urdu-ocr #utrnet #arxiv-2306.15782 #region-us
The UrduDoc Dataset is a benchmark dataset for Urdu text line detection in scanned documents. It is created as a byproduct of the UTRSet-Real dataset generation process. Comprising 478 diverse images collected from various sources such as books, documents, manuscripts, and newspapers, it offers a valuable resource for research in Urdu document analysis. It includes 358 pages for training and 120 pages for validation, featuring a wide range of styles, scales, and lighting conditions. It serves as a benchmark for evaluating printed Urdu text detection models, and the benchmark results of state-of-the-art models are provided. The Contour-Net model demonstrates the best performance in terms of h-mean. The UrduDoc dataset is the first of its kind for printed Urdu text line detection and will advance research in the field. It will be made publicly available for non-commercial, academic, and research purposes upon request and execution of a no-cost license agreement. To request the dataset and for more information and details about the UrduDoc , UTRSet-Real & UTRSet-Synth datasets, please refer to the Project Website of our paper "UTRNet: High-Resolution Urdu Text Recognition In Printed Documents"
[]
[ "TAGS\n#task_categories-image-to-text #language-Urdu #license-cc-by-nc-4.0 #ocr #text recognition #urdu-ocr #utrnet #arxiv-2306.15782 #region-us \n" ]
e5908da54d18f4578036446098d7f806b35b1b17
# Dataset Card for "translations" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
DerErikP/translations
[ "region:us" ]
2024-01-30T13:52:43+00:00
{"dataset_info": {"features": [{"name": "do_translation", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 56974, "num_examples": 100}], "download_size": 0, "dataset_size": 56974}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-30T13:53:53+00:00
[]
[]
TAGS #region-us
# Dataset Card for "translations" More Information needed
[ "# Dataset Card for \"translations\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"translations\"\n\nMore Information needed" ]
88be99ea8468003ad117d176d5d03a554f865397
Only one out of four images is a real photograph. --- dataset_info: features: - name: idx dtype: int32 - name: image1 dtype: image - name: image2 dtype: image - name: image3 dtype: image - name: image4 dtype: image - name: choices sequence: string - name: image1_label dtype: class_label: names: '0': fake '1': real - name: image2_label dtype: class_label: names: '0': fake '1': real - name: image3_label dtype: class_label: names: '0': fake '1': real - name: image4_label dtype: class_label: names: '0': fake '1': real - name: answer dtype: string splits: - name: test num_bytes: 22789253.0 num_examples: 153 download_size: 22786516 dataset_size: 22789253.0 configs: - config_name: default data_files: - split: test path: data/test-* ---
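A minimal inspection sketch follows, assuming the repository id shown below (PerceptionEval/Realness) loads directly with the `datasets` library; the feature names and the `test` split are taken from the dataset_info block above.

```python
# Minimal sketch: load the Realness benchmark and confirm that exactly one of
# the four images in an example is labelled "real". Feature names follow the
# dataset_info block above; direct loading from the Hub is assumed.
from datasets import load_dataset

ds = load_dataset("PerceptionEval/Realness", split="test")
labels = ds.features["image1_label"]  # ClassLabel: 0 = fake, 1 = real

ex = ds[0]
real_count = sum(
    labels.int2str(ex[f"image{i}_label"]) == "real" for i in range(1, 5)
)
print("real images in example 0:", real_count)   # expected: 1
print("ground-truth answer:", ex["answer"])
```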
PerceptionEval/Realness
[ "region:us" ]
2024-01-30T13:58:07+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}, {"split": "val", "path": "data/val-*"}]}], "dataset_info": {"features": [{"name": "idx", "dtype": "int32"}, {"name": "image1", "dtype": "image"}, {"name": "image2", "dtype": "image"}, {"name": "image3", "dtype": "image"}, {"name": "image4", "dtype": "image"}, {"name": "choices", "sequence": "string"}, {"name": "image1_label", "dtype": {"class_label": {"names": {"0": "fake", "1": "real"}}}}, {"name": "image2_label", "dtype": {"class_label": {"names": {"0": "fake", "1": "real"}}}}, {"name": "image3_label", "dtype": {"class_label": {"names": {"0": "fake", "1": "real"}}}}, {"name": "image4_label", "dtype": {"class_label": {"names": {"0": "fake", "1": "real"}}}}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 22199012.0, "num_examples": 150}, {"name": "val", "num_bytes": 22789253.0, "num_examples": 153}], "download_size": 44980808, "dataset_size": 44988265.0}}
2024-02-01T19:58:20+00:00
[]
[]
TAGS #region-us
Only one out of four images is a real photograph. --- dataset_info: features: - name: idx dtype: int32 - name: image1 dtype: image - name: image2 dtype: image - name: image3 dtype: image - name: image4 dtype: image - name: choices sequence: string - name: image1_label dtype: class_label: names: '0': fake '1': real - name: image2_label dtype: class_label: names: '0': fake '1': real - name: image3_label dtype: class_label: names: '0': fake '1': real - name: image4_label dtype: class_label: names: '0': fake '1': real - name: answer dtype: string splits: - name: test num_bytes: 22789253.0 num_examples: 153 download_size: 22786516 dataset_size: 22789253.0 configs: - config_name: default data_files: - split: test path: data/test-* ---
[]
[ "TAGS\n#region-us \n" ]
0c4a387c311a32e7a2ff1c47a37b5f40e237b5ee
# Dataset Card for "Train_VIVOS_CommonVoice_FOSD_Control_dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tuanmanh28/Train_VIVOS_CommonVoice_FOSD_Control_dataset
[ "region:us" ]
2024-01-30T14:22:13+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "text", "dtype": "string"}, {"name": "input_values", "sequence": "float32"}, {"name": "input_length", "dtype": "int64"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 15853252317.123964, "num_examples": 39529}, {"name": "val", "num_bytes": 2016239876.125, "num_examples": 5631}], "download_size": 15873864990, "dataset_size": 17869492193.248962}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "val", "path": "data/val-*"}]}]}
2024-01-30T15:44:27+00:00
[]
[]
TAGS #region-us
# Dataset Card for "Train_VIVOS_CommonVoice_FOSD_Control_dataset" More Information needed
[ "# Dataset Card for \"Train_VIVOS_CommonVoice_FOSD_Control_dataset\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"Train_VIVOS_CommonVoice_FOSD_Control_dataset\"\n\nMore Information needed" ]
02062730d35617f190377994934c98ec5e477c38
# MIMIR These datasets serve as a benchmark designed to evaluate membership inference attack (MIA) methods, specifically in detecting pretraining data from extensive large language models. ## 📌 Applicability The datasets can be applied to any model trained on The Pile, including (but not limited to): - GPTNeo - Pythia - OPT ## Loading the datasets To load the dataset: ```python from datasets import load_dataset dataset = load_dataset("iamgroot42/mimir", "pile_cc", split="ngram_7_0.2") ``` - Available Names: `arxiv`, `dm_mathematics`, `github`, `hackernews`, `pile_cc`, `pubmed_central`, `wikipedia_(en)`, `full_pile`, `c4`, `temporal_arxiv`, `temporal_wiki` - Available Splits: `ngram_7_0.2`, `ngram_13_0.2`, `ngram_13_0.8` (for most sources), 'none' (for other sources) - Available Features: `member` (str), `nonmember` (str), `member_neighbors` (List[str]), `nonmember_neighbors` (List[str]) ## 🛠️ Codebase For evaluating MIA methods on our datasets, visit our [GitHub repository](http://github.com/iamgroot42/mimir). ## ⭐ Citing our Work If you find our codebase and datasets beneficial, kindly cite [our work](https://arxiv.org/pdf/2402.07841.pdf): ```bibtex @article{duan2024membership, title={Do Membership Inference Attacks Work on Large Language Models?}, author={Michael Duan and Anshuman Suri and Niloofar Mireshghallah and Sewon Min and Weijia Shi and Luke Zettlemoyer and Yulia Tsvetkov and Yejin Choi and David Evans and Hannaneh Hajishirzi}, year={2024}, journal={arXiv:2402.07841}, } ```
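As a concrete starting point, here is a minimal sketch of a loss-threshold MIA baseline on one MIMIR config. The `load_dataset` call and the `member`/`nonmember` features follow the card above; the choice of target model (Pythia-1.4B, a Pile-trained model), the 50-record slice, and the truncation length are illustrative assumptions.

```python
# Minimal sketch of a loss-threshold membership-inference baseline on MIMIR.
# Scores each record by its average token-level negative log-likelihood under a
# Pile-trained target model; lower loss is taken as evidence of membership.
# Model choice, slice size, and max_length are illustrative assumptions.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

data = load_dataset("iamgroot42/mimir", "pile_cc", split="ngram_7_0.2")
tok = AutoTokenizer.from_pretrained("EleutherAI/pythia-1.4b")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-1.4b").eval()

@torch.no_grad()
def nll(text: str) -> float:
    ids = tok(text, return_tensors="pt", truncation=True, max_length=1024).input_ids
    out = model(ids, labels=ids)  # causal-LM loss = mean token NLL
    return out.loss.item()

member_scores = [nll(t) for t in data["member"][:50]]        # small slice for illustration
nonmember_scores = [nll(t) for t in data["nonmember"][:50]]
print("mean NLL, members:", sum(member_scores) / len(member_scores))
print("mean NLL, non-members:", sum(nonmember_scores) / len(nonmember_scores))
```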
iamgroot42/mimir
[ "size_categories:1K<n<10K", "language:en", "license:mit", "membership inference", "privacy", "arxiv:2402.07841", "region:us" ]
2024-01-30T14:27:16+00:00
{"language": ["en"], "license": "mit", "size_categories": ["1K<n<10K"], "pretty_name": "MIMIR", "tags": ["membership inference", "privacy"]}
2024-02-15T14:32:03+00:00
[ "2402.07841" ]
[ "en" ]
TAGS #size_categories-1K<n<10K #language-English #license-mit #membership inference #privacy #arxiv-2402.07841 #region-us
# MIMIR These datasets serve as a benchmark designed to evaluate membership inference attack (MIA) methods, specifically in detecting pretraining data from extensive large language models. ## Applicability The datasets can be applied to any model trained on The Pile, including (but not limited to): - GPTNeo - Pythia - OPT ## Loading the datasets To load the dataset: - Available Names: 'arxiv', 'dm_mathematics', 'github', 'hackernews', 'pile_cc', 'pubmed_central', 'wikipedia_(en)', 'full_pile', 'c4', 'temporal_arxiv', 'temporal_wiki' - Available Splits: 'ngram_7_0.2', 'ngram_13_0.2', 'ngram_13_0.8' (for most sources), 'none' (for other sources) - Available Features: 'member' (str), 'nonmember' (str), 'member_neighbors' (List[str]), 'nonmember_neighbors' (List[str]) ## ️ Codebase For evaluating MIA methods on our datasets, visit our GitHub repository. ## ⭐ Citing our Work If you find our codebase and datasets beneficial, kindly cite our work:
[ "# MIMIR\n\nThese datasets serve as a benchmark designed to evaluate membership inference attack (MIA) methods, specifically in detecting pretraining data from extensive large language models.", "## Applicability\n\nThe datasets can be applied to any model trained on The Pile, including (but not limited to):\n- GPTNeo\n- Pythia\n- OPT", "## Loading the datasets\nTo load the dataset:\n\n\n\n- Available Names: 'arxiv', 'dm_mathematics', 'github', 'hackernews', 'pile_cc', 'pubmed_central', 'wikipedia_(en)', 'full_pile', 'c4', 'temporal_arxiv', 'temporal_wiki'\n- Available Splits: 'ngram_7_0.2', 'ngram_13_0.2', 'ngram_13_0.8' (for most sources), 'none' (for other sources)\n- Available Features: 'member' (str), 'nonmember' (str), 'member_neighbors' (List[str]), 'nonmember_neighbors' (List[str])", "## ️ Codebase\nFor evaluating MIA methods on our datasets, visit our GitHub repository.", "## ⭐ Citing our Work\n\nIf you find our codebase and datasets beneficial, kindly cite our work:" ]
[ "TAGS\n#size_categories-1K<n<10K #language-English #license-mit #membership inference #privacy #arxiv-2402.07841 #region-us \n", "# MIMIR\n\nThese datasets serve as a benchmark designed to evaluate membership inference attack (MIA) methods, specifically in detecting pretraining data from extensive large language models.", "## Applicability\n\nThe datasets can be applied to any model trained on The Pile, including (but not limited to):\n- GPTNeo\n- Pythia\n- OPT", "## Loading the datasets\nTo load the dataset:\n\n\n\n- Available Names: 'arxiv', 'dm_mathematics', 'github', 'hackernews', 'pile_cc', 'pubmed_central', 'wikipedia_(en)', 'full_pile', 'c4', 'temporal_arxiv', 'temporal_wiki'\n- Available Splits: 'ngram_7_0.2', 'ngram_13_0.2', 'ngram_13_0.8' (for most sources), 'none' (for other sources)\n- Available Features: 'member' (str), 'nonmember' (str), 'member_neighbors' (List[str]), 'nonmember_neighbors' (List[str])", "## ️ Codebase\nFor evaluating MIA methods on our datasets, visit our GitHub repository.", "## ⭐ Citing our Work\n\nIf you find our codebase and datasets beneficial, kindly cite our work:" ]