sha: a2edf6a4a9588b3e81830cac3bd8659e12bdf8a2
# Mario Maker 2 level plays

Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets)

## Dataset Description

The Mario Maker 2 level plays dataset consists of 1 billion level plays from Nintendo's online service, totaling around 20GB of data. The dataset was created using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api) over the course of one month in February 2022.

### How to use it

The Mario Maker 2 level plays dataset is very large, so for most use cases it is recommended to use the streaming API of `datasets`. You can load and iterate through the dataset with the following code:

```python
from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_level_played", streaming=True, split="train")
print(next(iter(ds)))

#OUTPUT:
{
    'data_id': 3000004,
    'pid': '6382913755133534321',
    'cleared': 1,
    'liked': 0
}
```

Each row is a unique play of the level denoted by `data_id`, made by the player denoted by `pid`. `pid` is a 64 bit integer stored as a string due to database limitations. `cleared` and `liked` denote whether the player cleared and/or liked the level during their play. Every level has only one unique play per player. (A short aggregation sketch follows this card.)

You can also download the full dataset. Note that this will download ~20GB:

```python
ds = load_dataset("TheGreatRambler/mm2_level_played", split="train")
```

## Data Structure

### Data Instances

```python
{
    'data_id': 3000004,
    'pid': '6382913755133534321',
    'cleared': 1,
    'liked': 0
}
```

### Data Fields

|Field|Type|Description|
|---|---|---|
|data_id|int|The data ID of the level this play occurred in|
|pid|string|Player ID of the player|
|cleared|bool|Whether the player cleared the level during their play|
|liked|bool|Whether the player liked the level during their play|

### Data Splits

The dataset only contains a train split.

<!-- TODO create detailed statistics -->

## Dataset Creation

The dataset was created over a little more than a month in February 2022 using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api). Because requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no intentions to create an updated release of this dataset.

## Considerations for Using the Data

The dataset contains no harmful language or depictions.
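The `cleared` flag makes simple streaming aggregations straightforward. Below is a minimal sketch that computes per-level clear rates over a slice of the stream; the 100,000-row cap is an arbitrary sample size chosen for this example, not anything the dataset requires:

```python
from collections import defaultdict

from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_level_played", streaming=True, split="train")

plays = defaultdict(int)
clears = defaultdict(int)

# Tally plays and clears per level over a sample of the stream
for i, row in enumerate(ds):
    if i >= 100_000:  # arbitrary sample size for this sketch
        break
    plays[row["data_id"]] += 1
    clears[row["data_id"]] += row["cleared"]

# Report the clear rate of the most-played level in the sample
top = max(plays, key=plays.get)
print(f"Level {top}: {clears[top] / plays[top]:.1%} clear rate over {plays[top]} plays")
```

The same loop scales to the full billion rows; only the cap changes.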
id: TheGreatRambler/mm2_level_played
tags: [ "task_categories:other", "task_categories:object-detection", "task_categories:text-retrieval", "task_categories:token-classification", "task_categories:text-generation", "multilinguality:multilingual", "size_categories:1B<n<10B", "source_datasets:original", "language:multilingual", "license:cc-by-nc-sa-4.0", "text-mining", "region:us" ]
created_at: 2022-09-18T19:17:04+00:00
metadata: {"language": ["multilingual"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["multilingual"], "size_categories": ["1B<n<10B"], "source_datasets": ["original"], "task_categories": ["other", "object-detection", "text-retrieval", "token-classification", "text-generation"], "task_ids": [], "pretty_name": "Mario Maker 2 level plays", "tags": ["text-mining"]}
last_modified: 2022-11-11T08:05:36+00:00
arxiv: []
languages: [ "multilingual" ]
sha: 1f06c2b8cd09144b775cd328ed16b2033275cdc8
# Mario Maker 2 level deaths

Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets)

## Dataset Description

The Mario Maker 2 level deaths dataset consists of 564 million level deaths from Nintendo's online service, totaling around 2.5GB of data. The dataset was created using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api) over the course of one month in February 2022.

### How to use it

The Mario Maker 2 level deaths dataset is very large, so for most use cases it is recommended to use the streaming API of `datasets`. You can load and iterate through the dataset with the following code:

```python
from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_level_deaths", streaming=True, split="train")
print(next(iter(ds)))

#OUTPUT:
{
    'data_id': 3000382,
    'x': 696,
    'y': 0,
    'is_subworld': 0
}
```

Each row is a unique death in the level denoted by `data_id`, occurring at the given coordinates. `is_subworld` denotes whether the death happened in the main world or the subworld. (A short hotspot-tallying sketch follows this card.)

You can also download the full dataset. Note that this will download ~2.5GB:

```python
ds = load_dataset("TheGreatRambler/mm2_level_deaths", split="train")
```

## Data Structure

### Data Instances

```python
{
    'data_id': 3000382,
    'x': 696,
    'y': 0,
    'is_subworld': 0
}
```

### Data Fields

|Field|Type|Description|
|---|---|---|
|data_id|int|The data ID of the level this death occurred in|
|x|int|X coordinate of death|
|y|int|Y coordinate of death|
|is_subworld|bool|Whether the death happened in the main world or the subworld|

### Data Splits

The dataset only contains a train split.

<!-- TODO create detailed statistics -->

## Dataset Creation

The dataset was created over a little more than a month in February 2022 using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api). Because requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no intentions to create an updated release of this dataset.

## Considerations for Using the Data

The dataset contains no harmful language or depictions.
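Because each death carries level-relative coordinates, hotspots for a single level can be tallied straight off the stream. A minimal sketch, assuming you already know the `data_id` of a level of interest (the value below is just the sample row's level) and bucketing main-world deaths into 16-unit-wide columns, an arbitrary bin width chosen for this example:

```python
from collections import Counter

from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_level_deaths", streaming=True, split="train")

TARGET_LEVEL = 3000382  # data_id of a level of interest (from the sample row above)
columns = Counter()

for i, row in enumerate(ds):
    if i >= 1_000_000:  # arbitrary sample size for this sketch
        break
    if row["data_id"] == TARGET_LEVEL and not row["is_subworld"]:
        # Bucket main-world deaths into 16-unit-wide columns
        columns[row["x"] // 16] += 1

# Print the five deadliest columns in the sample
for col, deaths in columns.most_common(5):
    print(f"x in [{col * 16}, {col * 16 + 15}]: {deaths} deaths")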
id: TheGreatRambler/mm2_level_deaths
tags: [ "task_categories:other", "task_categories:object-detection", "task_categories:text-retrieval", "task_categories:token-classification", "task_categories:text-generation", "multilinguality:multilingual", "size_categories:100M<n<1B", "source_datasets:original", "language:multilingual", "license:cc-by-nc-sa-4.0", "text-mining", "region:us" ]
created_at: 2022-09-18T19:17:18+00:00
metadata: {"language": ["multilingual"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["multilingual"], "size_categories": ["100M<n<1B"], "source_datasets": ["original"], "task_categories": ["other", "object-detection", "text-retrieval", "token-classification", "text-generation"], "task_ids": [], "pretty_name": "Mario Maker 2 level deaths", "tags": ["text-mining"]}
last_modified: 2022-11-11T08:05:52+00:00
arxiv: []
languages: [ "multilingual" ]
sha: 0c95c15ed4e4ea278f0fbd57475381eae14eca2b
# Mario Maker 2 users

Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets)

## Dataset Description

The Mario Maker 2 users dataset consists of 6 million users from Nintendo's online service, totaling around 1.2GB of data. The dataset was created using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api) over the course of one month in February 2022.

### How to use it

The Mario Maker 2 users dataset is very large, so for most use cases it is recommended to use the streaming API of `datasets`. You can load and iterate through the dataset with the following code:

```python
from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_user", streaming=True, split="train")
print(next(iter(ds)))

#OUTPUT:
{
    'pid': '14608829447232141607',
    'data_id': 1,
    'region': 0,
    'name': 'げんまい',
    'country': 'JP',
    'last_active': 1578384457,
    'mii_data': [some binary data],
    'mii_image': '000f165d6574777a7881949e9da1acc1cac7cacad3dad9e0eff2f9faf900430a151c25384258637084878e8b96a0b0',
    'pose': 0,
    'hat': 0,
    'shirt': 0,
    'pants': 0,
    'wearing_outfit': 0,
    'courses_played': 12,
    'courses_cleared': 10,
    'courses_attempted': 23,
    'courses_deaths': 13,
    'likes': 0,
    'maker_points': 0,
    'easy_highscore': 0,
    'normal_highscore': 0,
    'expert_highscore': 0,
    'super_expert_highscore': 0,
    'versus_rating': 0,
    'versus_rank': 1,
    'versus_won': 0,
    'versus_lost': 1,
    'versus_win_streak': 0,
    'versus_lose_streak': 1,
    'versus_plays': 1,
    'versus_disconnected': 0,
    'coop_clears': 1,
    'coop_plays': 1,
    'recent_performance': 1383,
    'versus_kills': 0,
    'versus_killed_by_others': 0,
    'multiplayer_unk13': 286,
    'multiplayer_unk14': 5999927,
    'first_clears': 0,
    'world_records': 0,
    'unique_super_world_clears': 0,
    'uploaded_levels': 0,
    'maximum_uploaded_levels': 100,
    'weekly_maker_points': 0,
    'last_uploaded_level': 1561555201,
    'is_nintendo_employee': 0,
    'comments_enabled': 1,
    'tags_enabled': 0,
    'super_world_id': '',
    'unk3': 0,
    'unk12': 0,
    'unk16': 0
}
```

Each row is a unique user denoted by `pid`. `data_id` is not used by Nintendo but, like levels, it counts up sequentially and can be used to determine account age. `mii_data` is a `charinfo` type Switch Mii. `mii_image` can be used with Nintendo's online studio API to generate images:

```python
from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_user", streaming=True, split="train")
mii_image = next(iter(ds))["mii_image"]
print("Face: https://studio.mii.nintendo.com/miis/image.png?data=%s&type=face&width=512&instanceCount=1" % mii_image)
print("Body: https://studio.mii.nintendo.com/miis/image.png?data=%s&type=all_body&width=512&instanceCount=1" % mii_image)
print("Face (x16): https://studio.mii.nintendo.com/miis/image.png?data=%s&type=face&width=512&instanceCount=16" % mii_image)
print("Body (x16): https://studio.mii.nintendo.com/miis/image.png?data=%s&type=all_body&width=512&instanceCount=16" % mii_image)
```

`pose`, `hat`, `shirt` and `pants` have associated enums described below (a short decoding sketch follows this card). `last_active` and `last_uploaded_level` are UTC timestamps. `super_world_id`, if not empty, provides the ID of a super world in `TheGreatRambler/mm2_world`.

You can also download the full dataset. Note that this will download ~1.2GB:

```python
ds = load_dataset("TheGreatRambler/mm2_user", split="train")
```

## Data Structure

### Data Instances

```python
{
    'pid': '14608829447232141607',
    'data_id': 1,
    'region': 0,
    'name': 'げんまい',
    'country': 'JP',
    'last_active': 1578384457,
    'mii_data': [some binary data],
    'mii_image': '000f165d6574777a7881949e9da1acc1cac7cacad3dad9e0eff2f9faf900430a151c25384258637084878e8b96a0b0',
    'pose': 0,
    'hat': 0,
    'shirt': 0,
    'pants': 0,
    'wearing_outfit': 0,
    'courses_played': 12,
    'courses_cleared': 10,
    'courses_attempted': 23,
    'courses_deaths': 13,
    'likes': 0,
    'maker_points': 0,
    'easy_highscore': 0,
    'normal_highscore': 0,
    'expert_highscore': 0,
    'super_expert_highscore': 0,
    'versus_rating': 0,
    'versus_rank': 1,
    'versus_won': 0,
    'versus_lost': 1,
    'versus_win_streak': 0,
    'versus_lose_streak': 1,
    'versus_plays': 1,
    'versus_disconnected': 0,
    'coop_clears': 1,
    'coop_plays': 1,
    'recent_performance': 1383,
    'versus_kills': 0,
    'versus_killed_by_others': 0,
    'multiplayer_unk13': 286,
    'multiplayer_unk14': 5999927,
    'first_clears': 0,
    'world_records': 0,
    'unique_super_world_clears': 0,
    'uploaded_levels': 0,
    'maximum_uploaded_levels': 100,
    'weekly_maker_points': 0,
    'last_uploaded_level': 1561555201,
    'is_nintendo_employee': 0,
    'comments_enabled': 1,
    'tags_enabled': 0,
    'super_world_id': '',
    'unk3': 0,
    'unk12': 0,
    'unk16': 0
}
```

### Data Fields

|Field|Type|Description|
|---|---|---|
|pid|string|The player ID of this user, an unsigned 64 bit integer as a string|
|data_id|int|The data ID of this user; while not used internally, user codes are generated from it|
|region|int|User region, enum below|
|name|string|User name|
|country|string|User country as a 2 letter ALPHA-2 code|
|last_active|int|UTC timestamp of when this user was last active; it is not known what constitutes activity|
|mii_data|bytes|The CHARINFO blob of this user's Mii|
|mii_image|string|A string that can be fed into Nintendo's studio API to generate an image|
|pose|int|Pose, enum below|
|hat|int|Hat, enum below|
|shirt|int|Shirt, enum below|
|pants|int|Pants, enum below|
|wearing_outfit|bool|Whether this user's shirt counts as a full outfit (see `UserIsOutfit` below)|
|courses_played|int|How many courses this user has played|
|courses_cleared|int|How many courses this user has cleared|
|courses_attempted|int|How many courses this user has attempted|
|courses_deaths|int|How many times this user has died|
|likes|int|How many likes this user has received|
|maker_points|int|Maker points|
|easy_highscore|int|Easy highscore|
|normal_highscore|int|Normal highscore|
|expert_highscore|int|Expert highscore|
|super_expert_highscore|int|Super expert highscore|
|versus_rating|int|Versus rating|
|versus_rank|int|Versus rank, enum below|
|versus_won|int|How many courses this user has won in versus|
|versus_lost|int|How many courses this user has lost in versus|
|versus_win_streak|int|Versus win streak|
|versus_lose_streak|int|Versus lose streak|
|versus_plays|int|Versus plays|
|versus_disconnected|int|Times user has disconnected in versus|
|coop_clears|int|Coop clears|
|coop_plays|int|Coop plays|
|recent_performance|int|Unknown variable relating to versus performance|
|versus_kills|int|Kills in versus; it is not known what activities constitute a kill|
|versus_killed_by_others|int|Deaths in versus caused by other users; little is known about what activities constitute a death|
|multiplayer_unk13|int|Unknown, relating to multiplayer|
|multiplayer_unk14|int|Unknown, relating to multiplayer|
|first_clears|int|First clears|
|world_records|int|World records|
|unique_super_world_clears|int|Super world clears|
|uploaded_levels|int|Number of uploaded levels|
|maximum_uploaded_levels|int|Maximum number of levels this user may upload|
|weekly_maker_points|int|Weekly maker points|
|last_uploaded_level|int|UTC timestamp of when this user last uploaded a level|
|is_nintendo_employee|bool|Whether this user is an official Nintendo account|
|comments_enabled|bool|Whether this user has comments enabled on their levels|
|tags_enabled|bool|Whether this user has tags enabled on their levels|
|super_world_id|string|The ID of this user's super world, blank if they do not have one|
|unk3|int|Unknown|
|unk12|int|Unknown|
|unk16|int|Unknown|

### Data Splits

The dataset only contains a train split.

## Enums

The dataset contains some enum integer fields. These can be used to convert back to their string equivalents:

```python
Regions = {0: "Asia", 1: "Americas", 2: "Europe", 3: "Other"}

MultiplayerVersusRanks = {1: "D", 2: "C", 3: "B", 4: "A", 5: "S", 6: "S+"}

UserPose = {
    0: "Normal", 15: "Fidgety", 17: "Annoyed", 18: "Buoyant", 19: "Thrilled",
    20: "Let's go!", 21: "Hello!", 29: "Show-Off", 31: "Cutesy", 39: "Hyped!"
}

UserHat = {
    0: "None", 1: "Mario Cap", 2: "Luigi Cap", 4: "Mushroom Hairclip",
    5: "Bowser Headpiece", 8: "Princess Peach Wig", 11: "Builder Hard Hat",
    12: "Bowser Jr. Headpiece", 13: "Pipe Hat", 15: "Cat Mario Headgear",
    16: "Propeller Mario Helmet", 17: "Cheep Cheep Hat", 18: "Yoshi Hat",
    21: "Faceplant", 22: "Toad Cap", 23: "Shy Cap", 24: "Magikoopa Hat",
    25: "Fancy Top Hat", 26: "Doctor Headgear", 27: "Rocky Wrench Manhole Lid",
    28: "Super Star Barrette", 29: "Rosalina Wig", 30: "Fried-Chicken Headgear",
    31: "Royal Crown", 32: "Edamame Barrette", 33: "Superball Mario Hat",
    34: "Robot Cap", 35: "Frog Cap", 36: "Cheetah Headgear", 37: "Ninji Cap",
    38: "Super Acorn Hat", 39: "Pokey Hat", 40: "Snow Pokey Hat"
}

UserShirt = {
    0: "Nintendo Shirt", 1: "Mario Outfit", 2: "Luigi Outfit",
    3: "Super Mushroom Shirt", 5: "Blockstripe Shirt", 8: "Bowser Suit",
    12: "Builder Mario Outfit", 13: "Princess Peach Dress", 16: "Nintendo Uniform",
    17: "Fireworks Shirt", 19: "Refreshing Shirt", 21: "Reset Dress",
    22: "Thwomp Suit", 23: "Slobbery Shirt", 26: "Cat Suit",
    27: "Propeller Mario Clothes", 28: "Banzai Bill Shirt", 29: "Staredown Shirt",
    31: "Yoshi Suit", 33: "Midnight Dress", 34: "Magikoopa Robes",
    35: "Doctor Coat", 37: "Chomp-Dog Shirt", 38: "Fish Bone Shirt",
    40: "Toad Outfit", 41: "Googoo Onesie", 42: "Matrimony Dress",
    43: "Fancy Tuxedo", 44: "Koopa Troopa Suit", 45: "Laughing Shirt",
    46: "Running Shirt", 47: "Rosalina Dress", 49: "Angry Sun Shirt",
    50: "Fried-Chicken Hoodie", 51: "? Block Hoodie", 52: "Edamame Camisole",
    53: "I-Like-You Camisole", 54: "White Tanktop", 55: "Hot Hot Shirt",
    56: "Royal Attire", 57: "Superball Mario Suit", 59: "Partrick Shirt",
    60: "Robot Suit", 61: "Superb Suit", 62: "Yamamura Shirt",
    63: "Princess Peach Tennis Outfit", 64: "1-Up Hoodie", 65: "Cheetah Tanktop",
    66: "Cheetah Suit", 67: "Ninji Shirt", 68: "Ninji Garb",
    69: "Dash Block Hoodie", 70: "Fire Mario Shirt", 71: "Raccoon Mario Shirt",
    72: "Cape Mario Shirt", 73: "Flying Squirrel Mario Shirt",
    74: "Cat Mario Shirt", 75: "World Wear", 76: "Koopaling Hawaiian Shirt",
    77: "Frog Mario Raincoat", 78: "Phanto Hoodie"
}

UserPants = {
    0: "Black Short-Shorts", 1: "Denim Jeans", 5: "Denim Skirt", 8: "Pipe Skirt",
    9: "Skull Skirt", 10: "Burner Skirt", 11: "Cloudwalker", 12: "Platform Skirt",
    13: "Parent-and-Child Skirt", 17: "Mario Swim Trunks", 22: "Wind-Up Shoe",
    23: "Hoverclown", 24: "Big-Spender Shorts", 25: "Shorts of Doom!",
    26: "Doorduroys", 27: "Antsy Corduroys", 28: "Bouncy Skirt",
    29: "Stingby Skirt", 31: "Super Star Flares", 32: "Cheetah Runners",
    33: "Ninji Slacks"
}

# Checked against user's shirt
UserIsOutfit = {
    0: False, 1: True, 2: True, 3: False, 5: False, 8: True, 12: True, 13: True,
    16: False, 17: False, 19: False, 21: True, 22: True, 23: False, 26: True,
    27: True, 28: False, 29: False, 31: True, 33: True, 34: True, 35: True,
    37: False, 38: False, 40: True, 41: True, 42: True, 43: True, 44: True,
    45: False, 46: False, 47: True, 49: False, 50: False, 51: False, 52: False,
    53: False, 54: False, 55: False, 56: True, 57: True, 59: False, 60: True,
    61: True, 62: False, 63: True, 64: False, 65: False, 66: True, 67: False,
    68: True, 69: False, 70: False, 71: False, 72: False, 73: False, 74: False,
    75: True, 76: False, 77: True, 78: False
}
```

<!-- TODO create detailed statistics -->

## Dataset Creation

The dataset was created over a little more than a month in February 2022 using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api). Because requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no intentions to create an updated release of this dataset.

## Considerations for Using the Data

The dataset consists of many different Mario Maker 2 players globally and as such their names could contain harmful language. Harmful depictions could also be present in their Miis, should you choose to render them.
id: TheGreatRambler/mm2_user
tags: [ "task_categories:other", "task_categories:object-detection", "task_categories:text-retrieval", "task_categories:token-classification", "task_categories:text-generation", "multilinguality:multilingual", "size_categories:1M<n<10M", "source_datasets:original", "language:multilingual", "license:cc-by-nc-sa-4.0", "text-mining", "region:us" ]
created_at: 2022-09-18T19:17:35+00:00
metadata: {"language": ["multilingual"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["multilingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["other", "object-detection", "text-retrieval", "token-classification", "text-generation"], "task_ids": [], "pretty_name": "Mario Maker 2 users", "tags": ["text-mining"]}
last_modified: 2022-11-11T08:04:51+00:00
arxiv: []
languages: [ "multilingual" ]
sha: 75d9ee5258f795a705fdbfe9fa51e6956df0b71f
# Mario Maker 2 user badges

Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets)

## Dataset Description

The Mario Maker 2 user badges dataset consists of 9328 user badges (they are capped to 10k globally) from Nintendo's online service and adds onto `TheGreatRambler/mm2_user`. The dataset was created using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api) over the course of one month in February 2022.

### How to use it

You can load and iterate through the dataset with the following code:

```python
from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_user_badges", split="train")
print(next(iter(ds)))

#OUTPUT:
{
    'pid': '1779763691699286988',
    'type': 4,
    'rank': 6
}
```

Each row is a badge awarded to the player denoted by `pid`. `TheGreatRambler/mm2_user` contains these players. (A short tallying sketch follows this card.)

## Data Structure

### Data Instances

```python
{
    'pid': '1779763691699286988',
    'type': 4,
    'rank': 6
}
```

### Data Fields

|Field|Type|Description|
|---|---|---|
|pid|string|Player ID|
|type|int|The kind of badge, enum below|
|rank|int|The rank of badge, enum below|

### Data Splits

The dataset only contains a train split.

## Enums

The dataset contains some enum integer fields. These can be used to convert back to their string equivalents:

```python
BadgeTypes = {
    0: "Maker Points (All-Time)",
    1: "Endless Challenge (Easy)",
    2: "Endless Challenge (Normal)",
    3: "Endless Challenge (Expert)",
    4: "Endless Challenge (Super Expert)",
    5: "Multiplayer Versus",
    6: "Number of Clears",
    7: "Number of First Clears",
    8: "Number of World Records",
    9: "Maker Points (Weekly)"
}

BadgeRanks = {
    6: "Bronze",
    5: "Silver",
    4: "Gold",
    3: "Bronze Ribbon",
    2: "Silver Ribbon",
    1: "Gold Ribbon"
}
```

<!-- TODO create detailed statistics -->

## Dataset Creation

The dataset was created over a little more than a month in February 2022 using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api). Because requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no intentions to create an updated release of this dataset.

## Considerations for Using the Data

The dataset contains no harmful language or depictions.
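Since the dataset is small (under 10k rows), it can be loaded in full and tallied without streaming. A minimal sketch, assuming the `BadgeTypes` and `BadgeRanks` dictionaries from the block above are in scope:

```python
from collections import Counter

from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_user_badges", split="train")

# Count how many of each (type, rank) badge combination was awarded
counts = Counter((row["type"], row["rank"]) for row in ds)
for (btype, rank), n in counts.most_common(5):
    print(f"{BadgeTypes.get(btype, 'Unknown')} ({BadgeRanks.get(rank, 'Unknown')}): {n}")
```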
id: TheGreatRambler/mm2_user_badges
tags: [ "task_categories:other", "task_categories:object-detection", "task_categories:text-retrieval", "task_categories:token-classification", "task_categories:text-generation", "multilinguality:multilingual", "size_categories:1k<10K", "source_datasets:original", "language:multilingual", "license:cc-by-nc-sa-4.0", "text-mining", "region:us" ]
created_at: 2022-09-18T19:17:51+00:00
metadata: {"language": ["multilingual"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["multilingual"], "size_categories": ["1k<10K"], "source_datasets": ["original"], "task_categories": ["other", "object-detection", "text-retrieval", "token-classification", "text-generation"], "task_ids": [], "pretty_name": "Mario Maker 2 user badges", "tags": ["text-mining"]}
last_modified: 2022-11-11T08:05:05+00:00
arxiv: []
languages: [ "multilingual" ]
TAGS #task_categories-other #task_categories-object-detection #task_categories-text-retrieval #task_categories-token-classification #task_categories-text-generation #multilinguality-multilingual #size_categories-1k<10K #source_datasets-original #language-multilingual #license-cc-by-nc-sa-4.0 #text-mining #region-us
Mario Maker 2 user badges ========================= Part of the Mario Maker 2 Dataset Collection Dataset Description ------------------- The Mario Maker 2 user badges dataset consists of 9328 user badges (they are capped to 10k globally) from Nintendo's online service and adds onto 'TheGreatRambler/mm2\_user'. The dataset was created using the self-hosted Mario Maker 2 api over the course of 1 month in February 2022. ### How to use it You can load and iterate through the dataset with the following code: Each row is a badge awarded to the player denoted by 'pid'. 'TheGreatRambler/mm2\_user' contains these players. Data Structure -------------- ### Data Instances ### Data Fields Field: pid, Type: string, Description: Player ID Field: type, Type: int, Description: The kind of badge, enum below Field: rank, Type: int, Description: The rank of badge, enum below ### Data Splits The dataset only contains a train split. Enums ----- The dataset contains some enum integer fields. This can be used to convert back to their string equivalents: Dataset Creation ---------------- The dataset was created over a little more than a month in Febuary 2022 using the self hosted Mario Maker 2 api. As requests made to Nintendo's servers require authentication the process had to be done with upmost care and limiting download speed as to not overload the API and risk a ban. There are no intentions to create an updated release of this dataset. Considerations for Using the Data --------------------------------- The dataset contains no harmful language or depictions.
[ "### How to use it\n\n\nYou can load and iterate through the dataset with the following code:\n\n\nEach row is a badge awarded to the player denoted by 'pid'. 'TheGreatRambler/mm2\\_user' contains these players.\n\n\nData Structure\n--------------", "### Data Instances", "### Data Fields\n\n\nField: pid, Type: string, Description: Player ID\nField: type, Type: int, Description: The kind of badge, enum below\nField: rank, Type: int, Description: The rank of the badge, enum below", "### Data Splits\n\n\nThe dataset only contains a train split.\n\n\nEnums\n-----\n\n\nThe dataset contains some enum integer fields. These can be used to convert back to their string equivalents:\n\n\nDataset Creation\n----------------\n\n\nThe dataset was created over a little more than a month in February 2022 using the self-hosted Mario Maker 2 API. As requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no plans to create an updated release of this dataset.\n\n\nConsiderations for Using the Data\n---------------------------------\n\n\nThe dataset contains no harmful language or depictions." ]
[ "TAGS\n#task_categories-other #task_categories-object-detection #task_categories-text-retrieval #task_categories-token-classification #task_categories-text-generation #multilinguality-multilingual #size_categories-1K<n<10K #source_datasets-original #language-multilingual #license-cc-by-nc-sa-4.0 #text-mining #region-us \n", "### How to use it\n\n\nYou can load and iterate through the dataset with the following code:\n\n\nEach row is a badge awarded to the player denoted by 'pid'. 'TheGreatRambler/mm2\\_user' contains these players.\n\n\nData Structure\n--------------", "### Data Instances", "### Data Fields\n\n\nField: pid, Type: string, Description: Player ID\nField: type, Type: int, Description: The kind of badge, enum below\nField: rank, Type: int, Description: The rank of the badge, enum below", "### Data Splits\n\n\nThe dataset only contains a train split.\n\n\nEnums\n-----\n\n\nThe dataset contains some enum integer fields. These can be used to convert back to their string equivalents:\n\n\nDataset Creation\n----------------\n\n\nThe dataset was created over a little more than a month in February 2022 using the self-hosted Mario Maker 2 API. As requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no plans to create an updated release of this dataset.\n\n\nConsiderations for Using the Data\n---------------------------------\n\n\nThe dataset contains no harmful language or depictions." ]
44cde6a1c6338d7706bdabd2bbc42182073b9414
# Mario Maker 2 user plays

Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets)

## Dataset Description

The Mario Maker 2 user plays dataset consists of 329.8 million user plays from Nintendo's online service totaling around 2GB of data. The dataset was created using the self-hosted [Mario Maker 2 API](https://tgrcode.com/posts/mario_maker_2_api) over the course of 1 month in February 2022.

### How to use it

The Mario Maker 2 user plays dataset is a very large dataset, so for most use cases it is recommended to make use of the streaming API of `datasets`. You can load and iterate through the dataset with the following code:

```python
from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_user_played", streaming=True, split="train")
print(next(iter(ds)))

#OUTPUT:
{
	'pid': '4920036968545706712',
	'data_id': 25548552
}
```

Each row is a unique play of the level denoted by `data_id`, made by the player denoted by `pid`. A short worked example of aggregating plays appears at the end of this card.

You can also download the full dataset. Note that this will download ~2GB:

```python
ds = load_dataset("TheGreatRambler/mm2_user_played", split="train")
```

## Data Structure

### Data Instances

```python
{
	'pid': '4920036968545706712',
	'data_id': 25548552
}
```

### Data Fields

|Field|Type|Description|
|---|---|---|
|pid|string|The player ID of this user, an unsigned 64 bit integer as a string|
|data_id|int|The data ID of the level this user played|

### Data Splits

The dataset only contains a train split.

<!-- TODO create detailed statistics -->

## Dataset Creation

The dataset was created over a little more than a month in February 2022 using the self-hosted [Mario Maker 2 API](https://tgrcode.com/posts/mario_maker_2_api). As requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no plans to create an updated release of this dataset.

## Considerations for Using the Data

The dataset contains no harmful language or depictions.
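## Example: counting plays per level

A minimal sketch of aggregating the stream; the 100,000-row sample size is an arbitrary choice for illustration, and only the `datasets` streaming API plus the Python standard library are assumed:

```python
from collections import Counter
from itertools import islice

from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_user_played", streaming=True, split="train")

# Tally how many plays each level received within the sampled rows
plays_per_level = Counter(row["data_id"] for row in islice(ds, 100_000))

# The ten most played levels in the sample
print(plays_per_level.most_common(10))
```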
TheGreatRambler/mm2_user_played
[ "task_categories:other", "task_categories:object-detection", "task_categories:text-retrieval", "task_categories:token-classification", "task_categories:text-generation", "multilinguality:multilingual", "size_categories:100M<n<1B", "source_datasets:original", "language:multilingual", "license:cc-by-nc-sa-4.0", "text-mining", "region:us" ]
2022-09-18T19:18:08+00:00
{"language": ["multilingual"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["multilingual"], "size_categories": ["100M<n<1B"], "source_datasets": ["original"], "task_categories": ["other", "object-detection", "text-retrieval", "token-classification", "text-generation"], "task_ids": [], "pretty_name": "Mario Maker 2 user plays", "tags": ["text-mining"]}
2022-11-11T08:04:07+00:00
[]
[ "multilingual" ]
TAGS #task_categories-other #task_categories-object-detection #task_categories-text-retrieval #task_categories-token-classification #task_categories-text-generation #multilinguality-multilingual #size_categories-100M<n<1B #source_datasets-original #language-multilingual #license-cc-by-nc-sa-4.0 #text-mining #region-us
Mario Maker 2 user plays
========================

Part of the Mario Maker 2 Dataset Collection

Dataset Description
-------------------

The Mario Maker 2 user plays dataset consists of 329.8 million user plays from Nintendo's online service totaling around 2GB of data. The dataset was created using the self-hosted Mario Maker 2 API over the course of 1 month in February 2022.

### How to use it

The Mario Maker 2 user plays dataset is a very large dataset, so for most use cases it is recommended to make use of the streaming API of 'datasets'. You can load and iterate through the dataset with the following code:

Each row is a unique play of the level denoted by 'data\_id', made by the player denoted by 'pid'.

You can also download the full dataset. Note that this will download ~2GB:

Data Structure
--------------

### Data Instances

### Data Fields

Field: pid, Type: string, Description: The player ID of this user, an unsigned 64 bit integer as a string
Field: data\_id, Type: int, Description: The data ID of the level this user played

### Data Splits

The dataset only contains a train split.

Dataset Creation
----------------

The dataset was created over a little more than a month in February 2022 using the self-hosted Mario Maker 2 API. As requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no plans to create an updated release of this dataset.

Considerations for Using the Data
---------------------------------

The dataset contains no harmful language or depictions.
[ "### How to use it\n\n\nThe Mario Maker 2 user plays dataset is a very large dataset, so for most use cases it is recommended to make use of the streaming API of 'datasets'. You can load and iterate through the dataset with the following code:\n\n\nEach row is a unique play of the level denoted by 'data\\_id', made by the player denoted by 'pid'.\n\n\nYou can also download the full dataset. Note that this will download ~2GB:\n\n\nData Structure\n--------------", "### Data Instances", "### Data Fields\n\n\nField: pid, Type: string, Description: The player ID of this user, an unsigned 64 bit integer as a string\nField: data\\_id, Type: int, Description: The data ID of the level this user played", "### Data Splits\n\n\nThe dataset only contains a train split.\n\n\nDataset Creation\n----------------\n\n\nThe dataset was created over a little more than a month in February 2022 using the self-hosted Mario Maker 2 API. As requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no plans to create an updated release of this dataset.\n\n\nConsiderations for Using the Data\n---------------------------------\n\n\nThe dataset contains no harmful language or depictions." ]
[ "TAGS\n#task_categories-other #task_categories-object-detection #task_categories-text-retrieval #task_categories-token-classification #task_categories-text-generation #multilinguality-multilingual #size_categories-100M<n<1B #source_datasets-original #language-multilingual #license-cc-by-nc-sa-4.0 #text-mining #region-us \n", "### How to use it\n\n\nThe Mario Maker 2 user plays dataset is a very large dataset, so for most use cases it is recommended to make use of the streaming API of 'datasets'. You can load and iterate through the dataset with the following code:\n\n\nEach row is a unique play of the level denoted by 'data\\_id', made by the player denoted by 'pid'.\n\n\nYou can also download the full dataset. Note that this will download ~2GB:\n\n\nData Structure\n--------------", "### Data Instances", "### Data Fields\n\n\nField: pid, Type: string, Description: The player ID of this user, an unsigned 64 bit integer as a string\nField: data\\_id, Type: int, Description: The data ID of the level this user played", "### Data Splits\n\n\nThe dataset only contains a train split.\n\n\nDataset Creation\n----------------\n\n\nThe dataset was created over a little more than a month in February 2022 using the self-hosted Mario Maker 2 API. As requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no plans to create an updated release of this dataset.\n\n\nConsiderations for Using the Data\n---------------------------------\n\n\nThe dataset contains no harmful language or depictions." ]
a953a5eeb81d18f6b8dd6c525934797fd2b43248
# Mario Maker 2 user likes

Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets)

## Dataset Description

The Mario Maker 2 user likes dataset consists of 105.5 million user likes from Nintendo's online service totaling around 630MB of data. The dataset was created using the self-hosted [Mario Maker 2 API](https://tgrcode.com/posts/mario_maker_2_api) over the course of 1 month in February 2022.

### How to use it

The Mario Maker 2 user likes dataset is a very large dataset, so for most use cases it is recommended to make use of the streaming API of `datasets`. You can load and iterate through the dataset with the following code:

```python
from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_user_liked", streaming=True, split="train")
print(next(iter(ds)))

#OUTPUT:
{
	'pid': '14510618610706594411',
	'data_id': 25861713
}
```

Each row is a unique like on the level denoted by `data_id`, given by the player denoted by `pid`. A short worked example of filtering likes by player appears at the end of this card.

You can also download the full dataset. Note that this will download ~630MB:

```python
ds = load_dataset("TheGreatRambler/mm2_user_liked", split="train")
```

## Data Structure

### Data Instances

```python
{
	'pid': '14510618610706594411',
	'data_id': 25861713
}
```

### Data Fields

|Field|Type|Description|
|---|---|---|
|pid|string|The player ID of this user, an unsigned 64 bit integer as a string|
|data_id|int|The data ID of the level this user liked|

### Data Splits

The dataset only contains a train split.

<!-- TODO create detailed statistics -->

## Dataset Creation

The dataset was created over a little more than a month in February 2022 using the self-hosted [Mario Maker 2 API](https://tgrcode.com/posts/mario_maker_2_api). As requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no plans to create an updated release of this dataset.

## Considerations for Using the Data

The dataset contains no harmful language or depictions.
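## Example: filtering likes by player

A minimal sketch of pulling one player's likes out of the stream; the player ID below is the one from the example row above and is used purely for illustration:

```python
from itertools import islice

from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_user_liked", streaming=True, split="train")

# Keep only rows belonging to a single player
target_pid = "14510618610706594411"
liked = ds.filter(lambda row: row["pid"] == target_pid)

# Print up to ten of the levels this player liked
for row in islice(liked, 10):
    print(row["data_id"])
```

Because the filter scans the stream linearly, expect this to take a while if the matching rows are far into the dataset.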
TheGreatRambler/mm2_user_liked
[ "task_categories:other", "task_categories:object-detection", "task_categories:text-retrieval", "task_categories:token-classification", "task_categories:text-generation", "multilinguality:multilingual", "size_categories:100M<n<1B", "source_datasets:original", "language:multilingual", "license:cc-by-nc-sa-4.0", "text-mining", "region:us" ]
2022-09-18T19:18:19+00:00
{"language": ["multilingual"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["multilingual"], "size_categories": ["100M<n<1B"], "source_datasets": ["original"], "task_categories": ["other", "object-detection", "text-retrieval", "token-classification", "text-generation"], "task_ids": [], "pretty_name": "Mario Maker 2 user likes", "tags": ["text-mining"]}
2022-11-11T08:04:21+00:00
[]
[ "multilingual" ]
TAGS #task_categories-other #task_categories-object-detection #task_categories-text-retrieval #task_categories-token-classification #task_categories-text-generation #multilinguality-multilingual #size_categories-100M<n<1B #source_datasets-original #language-multilingual #license-cc-by-nc-sa-4.0 #text-mining #region-us
Mario Maker 2 user likes
========================

Part of the Mario Maker 2 Dataset Collection

Dataset Description
-------------------

The Mario Maker 2 user likes dataset consists of 105.5 million user likes from Nintendo's online service totaling around 630MB of data. The dataset was created using the self-hosted Mario Maker 2 API over the course of 1 month in February 2022.

### How to use it

The Mario Maker 2 user likes dataset is a very large dataset, so for most use cases it is recommended to make use of the streaming API of 'datasets'. You can load and iterate through the dataset with the following code:

Each row is a unique like on the level denoted by 'data\_id', given by the player denoted by 'pid'.

You can also download the full dataset. Note that this will download ~630MB:

Data Structure
--------------

### Data Instances

### Data Fields

Field: pid, Type: string, Description: The player ID of this user, an unsigned 64 bit integer as a string
Field: data\_id, Type: int, Description: The data ID of the level this user liked

### Data Splits

The dataset only contains a train split.

Dataset Creation
----------------

The dataset was created over a little more than a month in February 2022 using the self-hosted Mario Maker 2 API. As requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no plans to create an updated release of this dataset.

Considerations for Using the Data
---------------------------------

The dataset contains no harmful language or depictions.
[ "### How to use it\n\n\nThe Mario Maker 2 user likes dataset is a very large dataset, so for most use cases it is recommended to make use of the streaming API of 'datasets'. You can load and iterate through the dataset with the following code:\n\n\nEach row is a unique like on the level denoted by 'data\\_id', given by the player denoted by 'pid'.\n\n\nYou can also download the full dataset. Note that this will download ~630MB:\n\n\nData Structure\n--------------", "### Data Instances", "### Data Fields\n\n\nField: pid, Type: string, Description: The player ID of this user, an unsigned 64 bit integer as a string\nField: data\\_id, Type: int, Description: The data ID of the level this user liked", "### Data Splits\n\n\nThe dataset only contains a train split.\n\n\nDataset Creation\n----------------\n\n\nThe dataset was created over a little more than a month in February 2022 using the self-hosted Mario Maker 2 API. As requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no plans to create an updated release of this dataset.\n\n\nConsiderations for Using the Data\n---------------------------------\n\n\nThe dataset contains no harmful language or depictions." ]
[ "TAGS\n#task_categories-other #task_categories-object-detection #task_categories-text-retrieval #task_categories-token-classification #task_categories-text-generation #multilinguality-multilingual #size_categories-100M<n<1B #source_datasets-original #language-multilingual #license-cc-by-nc-sa-4.0 #text-mining #region-us \n", "### How to use it\n\n\nThe Mario Maker 2 user likes dataset is a very large dataset, so for most use cases it is recommended to make use of the streaming API of 'datasets'. You can load and iterate through the dataset with the following code:\n\n\nEach row is a unique like on the level denoted by 'data\\_id', given by the player denoted by 'pid'.\n\n\nYou can also download the full dataset. Note that this will download ~630MB:\n\n\nData Structure\n--------------", "### Data Instances", "### Data Fields\n\n\nField: pid, Type: string, Description: The player ID of this user, an unsigned 64 bit integer as a string\nField: data\\_id, Type: int, Description: The data ID of the level this user liked", "### Data Splits\n\n\nThe dataset only contains a train split.\n\n\nDataset Creation\n----------------\n\n\nThe dataset was created over a little more than a month in February 2022 using the self-hosted Mario Maker 2 API. As requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no plans to create an updated release of this dataset.\n\n\nConsiderations for Using the Data\n---------------------------------\n\n\nThe dataset contains no harmful language or depictions." ]
35e87e12b511552496fa9ccecd601629fa7f2a1c
# Mario Maker 2 user uploaded

Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets)

## Dataset Description

The Mario Maker 2 user uploaded dataset consists of 26.5 million uploaded user levels from Nintendo's online service totaling around 215MB of data. The dataset was created using the self-hosted [Mario Maker 2 API](https://tgrcode.com/posts/mario_maker_2_api) over the course of 1 month in February 2022.

### How to use it

The Mario Maker 2 user uploaded dataset is a very large dataset, so for most use cases it is recommended to make use of the streaming API of `datasets`. You can load and iterate through the dataset with the following code:

```python
from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_user_posted", streaming=True, split="train")
print(next(iter(ds)))

#OUTPUT:
{
	'pid': '10491033288855085861',
	'data_id': 27359486
}
```

Each row is a unique level, denoted by `data_id`, uploaded by the player denoted by `pid`. A short worked example of grouping uploads by creator appears at the end of this card.

You can also download the full dataset. Note that this will download ~215MB:

```python
ds = load_dataset("TheGreatRambler/mm2_user_posted", split="train")
```

## Data Structure

### Data Instances

```python
{
	'pid': '10491033288855085861',
	'data_id': 27359486
}
```

### Data Fields

|Field|Type|Description|
|---|---|---|
|pid|string|The player ID of this user, an unsigned 64 bit integer as a string|
|data_id|int|The data ID of the level this user uploaded|

### Data Splits

The dataset only contains a train split.

<!-- TODO create detailed statistics -->

## Dataset Creation

The dataset was created over a little more than a month in February 2022 using the self-hosted [Mario Maker 2 API](https://tgrcode.com/posts/mario_maker_2_api). As requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no plans to create an updated release of this dataset.

## Considerations for Using the Data

The dataset contains no harmful language or depictions.
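## Example: grouping uploads by creator

A minimal sketch of building a creator-to-levels index from a sample of the stream; the 50,000-row sample size is arbitrary and chosen only to keep the run short:

```python
from collections import defaultdict
from itertools import islice

from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_user_posted", streaming=True, split="train")

# Map each creator to the levels they uploaded within the sample
levels_by_creator = defaultdict(list)
for row in islice(ds, 50_000):
    levels_by_creator[row["pid"]].append(row["data_id"])

# The most prolific uploader seen in the sample
top = max(levels_by_creator, key=lambda pid: len(levels_by_creator[pid]))
print(top, len(levels_by_creator[top]))
```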
TheGreatRambler/mm2_user_posted
[ "task_categories:other", "task_categories:object-detection", "task_categories:text-retrieval", "task_categories:token-classification", "task_categories:text-generation", "multilinguality:multilingual", "size_categories:10M<n<100M", "source_datasets:original", "language:multilingual", "license:cc-by-nc-sa-4.0", "text-mining", "region:us" ]
2022-09-18T19:18:30+00:00
{"language": ["multilingual"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["multilingual"], "size_categories": ["10M<n<100M"], "source_datasets": ["original"], "task_categories": ["other", "object-detection", "text-retrieval", "token-classification", "text-generation"], "task_ids": [], "pretty_name": "Mario Maker 2 user uploaded", "tags": ["text-mining"]}
2022-11-11T08:03:53+00:00
[]
[ "multilingual" ]
TAGS #task_categories-other #task_categories-object-detection #task_categories-text-retrieval #task_categories-token-classification #task_categories-text-generation #multilinguality-multilingual #size_categories-10M<n<100M #source_datasets-original #language-multilingual #license-cc-by-nc-sa-4.0 #text-mining #region-us
Mario Maker 2 user uploaded
===========================

Part of the Mario Maker 2 Dataset Collection

Dataset Description
-------------------

The Mario Maker 2 user uploaded dataset consists of 26.5 million uploaded user levels from Nintendo's online service totaling around 215MB of data. The dataset was created using the self-hosted Mario Maker 2 API over the course of 1 month in February 2022.

### How to use it

The Mario Maker 2 user uploaded dataset is a very large dataset, so for most use cases it is recommended to make use of the streaming API of 'datasets'. You can load and iterate through the dataset with the following code:

Each row is a unique level, denoted by 'data\_id', uploaded by the player denoted by 'pid'.

You can also download the full dataset. Note that this will download ~215MB:

Data Structure
--------------

### Data Instances

### Data Fields

Field: pid, Type: string, Description: The player ID of this user, an unsigned 64 bit integer as a string
Field: data\_id, Type: int, Description: The data ID of the level this user uploaded

### Data Splits

The dataset only contains a train split.

Dataset Creation
----------------

The dataset was created over a little more than a month in February 2022 using the self-hosted Mario Maker 2 API. As requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no plans to create an updated release of this dataset.

Considerations for Using the Data
---------------------------------

The dataset contains no harmful language or depictions.
[ "### How to use it\n\n\nThe Mario Maker 2 user uploaded dataset is a very large dataset, so for most use cases it is recommended to make use of the streaming API of 'datasets'. You can load and iterate through the dataset with the following code:\n\n\nEach row is a unique level, denoted by 'data\\_id', uploaded by the player denoted by 'pid'.\n\n\nYou can also download the full dataset. Note that this will download ~215MB:\n\n\nData Structure\n--------------", "### Data Instances", "### Data Fields\n\n\nField: pid, Type: string, Description: The player ID of this user, an unsigned 64 bit integer as a string\nField: data\\_id, Type: int, Description: The data ID of the level this user uploaded", "### Data Splits\n\n\nThe dataset only contains a train split.\n\n\nDataset Creation\n----------------\n\n\nThe dataset was created over a little more than a month in February 2022 using the self-hosted Mario Maker 2 API. As requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no plans to create an updated release of this dataset.\n\n\nConsiderations for Using the Data\n---------------------------------\n\n\nThe dataset contains no harmful language or depictions." ]
[ "TAGS\n#task_categories-other #task_categories-object-detection #task_categories-text-retrieval #task_categories-token-classification #task_categories-text-generation #multilinguality-multilingual #size_categories-10M<n<100M #source_datasets-original #language-multilingual #license-cc-by-nc-sa-4.0 #text-mining #region-us \n", "### How to use it\n\n\nThe Mario Maker 2 user uploaded dataset is a very large dataset, so for most use cases it is recommended to make use of the streaming API of 'datasets'. You can load and iterate through the dataset with the following code:\n\n\nEach row is a unique level, denoted by 'data\\_id', uploaded by the player denoted by 'pid'.\n\n\nYou can also download the full dataset. Note that this will download ~215MB:\n\n\nData Structure\n--------------", "### Data Instances", "### Data Fields\n\n\nField: pid, Type: string, Description: The player ID of this user, an unsigned 64 bit integer as a string\nField: data\\_id, Type: int, Description: The data ID of the level this user uploaded", "### Data Splits\n\n\nThe dataset only contains a train split.\n\n\nDataset Creation\n----------------\n\n\nThe dataset was created over a little more than a month in February 2022 using the self-hosted Mario Maker 2 API. As requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no plans to create an updated release of this dataset.\n\n\nConsiderations for Using the Data\n---------------------------------\n\n\nThe dataset contains no harmful language or depictions." ]
15ec37e8e8d6f4806c2fe5947defa8d3e9b41250
# Mario Maker 2 user first clears

Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets)

## Dataset Description

The Mario Maker 2 user first clears dataset consists of 17.8 million first clears from Nintendo's online service totaling around 157MB of data. The dataset was created using the self-hosted [Mario Maker 2 API](https://tgrcode.com/posts/mario_maker_2_api) over the course of 1 month in February 2022.

### How to use it

The Mario Maker 2 user first clears dataset is a very large dataset, so for most use cases it is recommended to make use of the streaming API of `datasets`. You can load and iterate through the dataset with the following code:

```python
from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_user_first_cleared", streaming=True, split="train")
print(next(iter(ds)))

#OUTPUT:
{
	'pid': '14510618610706594411',
	'data_id': 25199891
}
```

Each row is a unique first clear of the level denoted by `data_id`, achieved by the player denoted by `pid`. A short worked example of tallying first clears per player appears at the end of this card.

You can also download the full dataset. Note that this will download ~157MB:

```python
ds = load_dataset("TheGreatRambler/mm2_user_first_cleared", split="train")
```

## Data Structure

### Data Instances

```python
{
	'pid': '14510618610706594411',
	'data_id': 25199891
}
```

### Data Fields

|Field|Type|Description|
|---|---|---|
|pid|string|The player ID of this user, an unsigned 64 bit integer as a string|
|data_id|int|The data ID of the level this user first cleared|

### Data Splits

The dataset only contains a train split.

<!-- TODO create detailed statistics -->

## Dataset Creation

The dataset was created over a little more than a month in February 2022 using the self-hosted [Mario Maker 2 API](https://tgrcode.com/posts/mario_maker_2_api). As requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no plans to create an updated release of this dataset.

## Considerations for Using the Data

The dataset contains no harmful language or depictions.
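## Example: tallying first clears per player

A minimal sketch of approximating a "Number of First Clears" leaderboard from a sample of the stream; the 100,000-row sample size is arbitrary:

```python
from collections import Counter
from itertools import islice

from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_user_first_cleared", streaming=True, split="train")

# Count first clears per player within the sampled rows
first_clears = Counter(row["pid"] for row in islice(ds, 100_000))

# The ten most frequent first clearers in the sample
print(first_clears.most_common(10))
```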
TheGreatRambler/mm2_user_first_cleared
[ "task_categories:other", "task_categories:object-detection", "task_categories:text-retrieval", "task_categories:token-classification", "task_categories:text-generation", "multilinguality:multilingual", "size_categories:10M<n<100M", "source_datasets:original", "language:multilingual", "license:cc-by-nc-sa-4.0", "text-mining", "region:us" ]
2022-09-18T19:18:41+00:00
{"language": ["multilingual"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["multilingual"], "size_categories": ["10M<n<100M"], "source_datasets": ["original"], "task_categories": ["other", "object-detection", "text-retrieval", "token-classification", "text-generation"], "task_ids": [], "pretty_name": "Mario Maker 2 user first clears", "tags": ["text-mining"]}
2022-11-11T08:04:34+00:00
[]
[ "multilingual" ]
TAGS #task_categories-other #task_categories-object-detection #task_categories-text-retrieval #task_categories-token-classification #task_categories-text-generation #multilinguality-multilingual #size_categories-10M<n<100M #source_datasets-original #language-multilingual #license-cc-by-nc-sa-4.0 #text-mining #region-us
Mario Maker 2 user first clears
===============================

Part of the Mario Maker 2 Dataset Collection

Dataset Description
-------------------

The Mario Maker 2 user first clears dataset consists of 17.8 million first clears from Nintendo's online service totaling around 157MB of data. The dataset was created using the self-hosted Mario Maker 2 API over the course of 1 month in February 2022.

### How to use it

The Mario Maker 2 user first clears dataset is a very large dataset, so for most use cases it is recommended to make use of the streaming API of 'datasets'. You can load and iterate through the dataset with the following code:

Each row is a unique first clear of the level denoted by 'data\_id', achieved by the player denoted by 'pid'.

You can also download the full dataset. Note that this will download ~157MB:

Data Structure
--------------

### Data Instances

### Data Fields

Field: pid, Type: string, Description: The player ID of this user, an unsigned 64 bit integer as a string
Field: data\_id, Type: int, Description: The data ID of the level this user first cleared

### Data Splits

The dataset only contains a train split.

Dataset Creation
----------------

The dataset was created over a little more than a month in February 2022 using the self-hosted Mario Maker 2 API. As requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no plans to create an updated release of this dataset.

Considerations for Using the Data
---------------------------------

The dataset contains no harmful language or depictions.
[ "### How to use it\n\n\nThe Mario Maker 2 user first clears dataset is a very large dataset, so for most use cases it is recommended to make use of the streaming API of 'datasets'. You can load and iterate through the dataset with the following code:\n\n\nEach row is a unique first clear of the level denoted by 'data\\_id', achieved by the player denoted by 'pid'.\n\n\nYou can also download the full dataset. Note that this will download ~157MB:\n\n\nData Structure\n--------------", "### Data Instances", "### Data Fields\n\n\nField: pid, Type: string, Description: The player ID of this user, an unsigned 64 bit integer as a string\nField: data\\_id, Type: int, Description: The data ID of the level this user first cleared", "### Data Splits\n\n\nThe dataset only contains a train split.\n\n\nDataset Creation\n----------------\n\n\nThe dataset was created over a little more than a month in February 2022 using the self-hosted Mario Maker 2 API. As requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no plans to create an updated release of this dataset.\n\n\nConsiderations for Using the Data\n---------------------------------\n\n\nThe dataset contains no harmful language or depictions." ]
[ "TAGS\n#task_categories-other #task_categories-object-detection #task_categories-text-retrieval #task_categories-token-classification #task_categories-text-generation #multilinguality-multilingual #size_categories-10M<n<100M #source_datasets-original #language-multilingual #license-cc-by-nc-sa-4.0 #text-mining #region-us \n", "### How to use it\n\n\nThe Mario Maker 2 user first clears dataset is a very large dataset, so for most use cases it is recommended to make use of the streaming API of 'datasets'. You can load and iterate through the dataset with the following code:\n\n\nEach row is a unique first clear of the level denoted by 'data\\_id', achieved by the player denoted by 'pid'.\n\n\nYou can also download the full dataset. Note that this will download ~157MB:\n\n\nData Structure\n--------------", "### Data Instances", "### Data Fields\n\n\nField: pid, Type: string, Description: The player ID of this user, an unsigned 64 bit integer as a string\nField: data\\_id, Type: int, Description: The data ID of the level this user first cleared", "### Data Splits\n\n\nThe dataset only contains a train split.\n\n\nDataset Creation\n----------------\n\n\nThe dataset was created over a little more than a month in February 2022 using the self-hosted Mario Maker 2 API. As requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no plans to create an updated release of this dataset.\n\n\nConsiderations for Using the Data\n---------------------------------\n\n\nThe dataset contains no harmful language or depictions." ]
f653680f7713e6f89eea9fc82bd96cbd498010cc
# Mario Maker 2 user world records

Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets)

## Dataset Description

The Mario Maker 2 user world records dataset consists of 15.3 million world records from Nintendo's online service totaling around 215MB of data. The dataset was created using the self-hosted [Mario Maker 2 API](https://tgrcode.com/posts/mario_maker_2_api) over the course of 1 month in February 2022.

### How to use it

The Mario Maker 2 user world records dataset is a very large dataset, so for most use cases it is recommended to make use of the streaming API of `datasets`. You can load and iterate through the dataset with the following code:

```python
from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_user_world_record", streaming=True, split="train")
print(next(iter(ds)))

#OUTPUT:
{
	'pid': '14510618610706594411',
	'data_id': 24866513
}
```

Each row is a unique world record on the level denoted by `data_id`, set by the player denoted by `pid`. A short worked example of joining this dataset with the first clears dataset appears at the end of this card.

You can also download the full dataset. Note that this will download ~215MB:

```python
ds = load_dataset("TheGreatRambler/mm2_user_world_record", split="train")
```

## Data Structure

### Data Instances

```python
{
	'pid': '14510618610706594411',
	'data_id': 24866513
}
```

### Data Fields

|Field|Type|Description|
|---|---|---|
|pid|string|The player ID of this user, an unsigned 64 bit integer as a string|
|data_id|int|The data ID of the level this user holds the world record on|

### Data Splits

The dataset only contains a train split.

<!-- TODO create detailed statistics -->

## Dataset Creation

The dataset was created over a little more than a month in February 2022 using the self-hosted [Mario Maker 2 API](https://tgrcode.com/posts/mario_maker_2_api). As requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no plans to create an updated release of this dataset.

## Considerations for Using the Data

The dataset contains no harmful language or depictions.
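## Example: joining with first clears

A minimal sketch of cross-referencing this dataset with `TheGreatRambler/mm2_user_first_cleared`; both are small enough to download in full, and pandas is assumed for the join:

```python
from datasets import load_dataset

# Both downloads are modest (~215MB and ~157MB respectively)
wr = load_dataset("TheGreatRambler/mm2_user_world_record", split="train").to_pandas()
fc = load_dataset("TheGreatRambler/mm2_user_first_cleared", split="train").to_pandas()

# Rows where the same player both first cleared a level and, as of this
# snapshot, held its world record
both = wr.merge(fc, on=["pid", "data_id"])
print(len(both))
```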
TheGreatRambler/mm2_user_world_record
[ "task_categories:other", "task_categories:object-detection", "task_categories:text-retrieval", "task_categories:token-classification", "task_categories:text-generation", "multilinguality:multilingual", "size_categories:10M<n<100M", "source_datasets:original", "language:multilingual", "license:cc-by-nc-sa-4.0", "text-mining", "region:us" ]
2022-09-18T19:18:54+00:00
{"language": ["multilingual"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["multilingual"], "size_categories": ["10M<n<100M"], "source_datasets": ["original"], "task_categories": ["other", "object-detection", "text-retrieval", "token-classification", "text-generation"], "task_ids": [], "pretty_name": "Mario Maker 2 user world records", "tags": ["text-mining"]}
2022-11-11T08:03:39+00:00
[]
[ "multilingual" ]
TAGS #task_categories-other #task_categories-object-detection #task_categories-text-retrieval #task_categories-token-classification #task_categories-text-generation #multilinguality-multilingual #size_categories-10M<n<100M #source_datasets-original #language-multilingual #license-cc-by-nc-sa-4.0 #text-mining #region-us
Mario Maker 2 user world records
================================

Part of the Mario Maker 2 Dataset Collection

Dataset Description
-------------------

The Mario Maker 2 user world records dataset consists of 15.3 million world records from Nintendo's online service totaling around 215MB of data. The dataset was created using the self-hosted Mario Maker 2 API over the course of 1 month in February 2022.

### How to use it

The Mario Maker 2 user world records dataset is a very large dataset, so for most use cases it is recommended to make use of the streaming API of 'datasets'. You can load and iterate through the dataset with the following code:

Each row is a unique world record on the level denoted by 'data\_id', set by the player denoted by 'pid'.

You can also download the full dataset. Note that this will download ~215MB:

Data Structure
--------------

### Data Instances

### Data Fields

Field: pid, Type: string, Description: The player ID of this user, an unsigned 64 bit integer as a string
Field: data\_id, Type: int, Description: The data ID of the level this user holds the world record on

### Data Splits

The dataset only contains a train split.

Dataset Creation
----------------

The dataset was created over a little more than a month in February 2022 using the self-hosted Mario Maker 2 API. As requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no plans to create an updated release of this dataset.

Considerations for Using the Data
---------------------------------

The dataset contains no harmful language or depictions.
[ "### How to use it\n\n\nThe Mario Maker 2 user world records dataset is a very large dataset, so for most use cases it is recommended to make use of the streaming API of 'datasets'. You can load and iterate through the dataset with the following code:\n\n\nEach row is a unique world record on the level denoted by 'data\\_id', set by the player denoted by 'pid'.\n\n\nYou can also download the full dataset. Note that this will download ~215MB:\n\n\nData Structure\n--------------", "### Data Instances", "### Data Fields\n\n\nField: pid, Type: string, Description: The player ID of this user, an unsigned 64 bit integer as a string\nField: data\\_id, Type: int, Description: The data ID of the level this user holds the world record on", "### Data Splits\n\n\nThe dataset only contains a train split.\n\n\nDataset Creation\n----------------\n\n\nThe dataset was created over a little more than a month in February 2022 using the self-hosted Mario Maker 2 API. As requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no plans to create an updated release of this dataset.\n\n\nConsiderations for Using the Data\n---------------------------------\n\n\nThe dataset contains no harmful language or depictions." ]
[ "TAGS\n#task_categories-other #task_categories-object-detection #task_categories-text-retrieval #task_categories-token-classification #task_categories-text-generation #multilinguality-multilingual #size_categories-10M<n<100M #source_datasets-original #language-multilingual #license-cc-by-nc-sa-4.0 #text-mining #region-us \n", "### How to use it\n\n\nThe Mario Maker 2 user world records dataset is a very large dataset, so for most use cases it is recommended to make use of the streaming API of 'datasets'. You can load and iterate through the dataset with the following code:\n\n\nEach row is a unique world record on the level denoted by 'data\\_id', set by the player denoted by 'pid'.\n\n\nYou can also download the full dataset. Note that this will download ~215MB:\n\n\nData Structure\n--------------", "### Data Instances", "### Data Fields\n\n\nField: pid, Type: string, Description: The player ID of this user, an unsigned 64 bit integer as a string\nField: data\\_id, Type: int, Description: The data ID of the level this user holds the world record on", "### Data Splits\n\n\nThe dataset only contains a train split.\n\n\nDataset Creation\n----------------\n\n\nThe dataset was created over a little more than a month in February 2022 using the self-hosted Mario Maker 2 API. As requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no plans to create an updated release of this dataset.\n\n\nConsiderations for Using the Data\n---------------------------------\n\n\nThe dataset contains no harmful language or depictions." ]
8640ff2491a3298963d72a0f15d28af1919b8b19
# Mario Maker 2 super worlds

Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets)

## Dataset Description

The Mario Maker 2 super worlds dataset consists of 289 thousand super worlds from Nintendo's online service totaling around 13.5GB of data. The dataset was created using the self-hosted [Mario Maker 2 API](https://tgrcode.com/posts/mario_maker_2_api) over the course of 1 month in February 2022.

### How to use it

The Mario Maker 2 super worlds dataset is a very large dataset, so for most use cases it is recommended to make use of the streaming API of `datasets`. You can load and iterate through the dataset with the following code:

```python
from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_world", streaming=True, split="train")
print(next(iter(ds)))

#OUTPUT:
{
	'pid': '14510618610706594411',
	'world_id': 'c96012bef256ba6b_20200513204805563301',
	'worlds': 1,
	'levels': 5,
	'planet_type': 0,
	'created': 1589420886,
	'unk1': [some binary data],
	'unk5': 3,
	'unk6': 1,
	'unk7': 1,
	'thumbnail': [some binary data]
}
```

Each row is a unique super world denoted by `world_id`, created by the player denoted by `pid`. Thumbnails are binary JPEGs. `unk1` describes the super world itself, including the world map, but its format is currently unknown. A short worked example of decoding a thumbnail appears at the end of this card.

You can also download the full dataset. Note that this will download ~13.5GB:

```python
ds = load_dataset("TheGreatRambler/mm2_world", split="train")
```

## Data Structure

### Data Instances

```python
{
	'pid': '14510618610706594411',
	'world_id': 'c96012bef256ba6b_20200513204805563301',
	'worlds': 1,
	'levels': 5,
	'planet_type': 0,
	'created': 1589420886,
	'unk1': [some binary data],
	'unk5': 3,
	'unk6': 1,
	'unk7': 1,
	'thumbnail': [some binary data]
}
```

### Data Fields

|Field|Type|Description|
|---|---|---|
|pid|string|The player ID of the user who created this super world|
|world_id|string|World ID|
|worlds|int|Number of worlds|
|levels|int|Number of levels|
|planet_type|int|Planet type, enum below|
|created|int|UTC timestamp of when this super world was created|
|unk1|bytes|Unknown|
|unk5|int|Unknown|
|unk6|int|Unknown|
|unk7|int|Unknown|
|thumbnail|bytes|The thumbnail, as a JPEG binary|
|thumbnail_url|string|The old URL of this thumbnail|
|thumbnail_size|int|The filesize of this thumbnail|
|thumbnail_filename|string|The filename of this thumbnail|

### Data Splits

The dataset only contains a train split.

## Enums

The dataset contains some enum integer fields. These can be used to convert back to their string equivalents:

```python
SuperWorldPlanetType = {
	0: "Earth",
	1: "Moon",
	2: "Sand",
	3: "Green",
	4: "Ice",
	5: "Ringed",
	6: "Red",
	7: "Spiral"
}
```

<!-- TODO create detailed statistics -->

## Dataset Creation

The dataset was created over a little more than a month in February 2022 using the self-hosted [Mario Maker 2 API](https://tgrcode.com/posts/mario_maker_2_api). As requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no plans to create an updated release of this dataset.

## Considerations for Using the Data

The dataset consists of super worlds from many different Mario Maker 2 players globally and as such harmful depictions could be present in their super world thumbnails.
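## Example: decoding a super world thumbnail

A minimal sketch of reading one row and saving its thumbnail; Pillow is assumed, and the thumbnail bytes are assumed to be a JPEG as stated in the field table (PIL detects the actual format from the header regardless):

```python
import io

from datasets import load_dataset
from PIL import Image

# Enum table copied from the Enums section of this card
SuperWorldPlanetType = {
    0: "Earth", 1: "Moon", 2: "Sand", 3: "Green",
    4: "Ice", 5: "Ringed", 6: "Red", 7: "Spiral"
}

ds = load_dataset("TheGreatRambler/mm2_world", streaming=True, split="train")
world = next(iter(ds))

print(world["world_id"], SuperWorldPlanetType[world["planet_type"]],
      world["worlds"], "worlds,", world["levels"], "levels")

# Decode the raw bytes and write the thumbnail to disk
Image.open(io.BytesIO(world["thumbnail"])).save("thumbnail.png")
```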
TheGreatRambler/mm2_world
[ "task_categories:other", "task_categories:object-detection", "task_categories:text-retrieval", "task_categories:token-classification", "task_categories:text-generation", "multilinguality:multilingual", "size_categories:100K<n<1M", "source_datasets:original", "language:multilingual", "license:cc-by-nc-sa-4.0", "text-mining", "region:us" ]
2022-09-18T19:19:10+00:00
{"language": ["multilingual"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["multilingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["other", "object-detection", "text-retrieval", "token-classification", "text-generation"], "task_ids": [], "pretty_name": "Mario Maker 2 super worlds", "tags": ["text-mining"]}
2022-11-11T08:08:15+00:00
[]
[ "multilingual" ]
TAGS #task_categories-other #task_categories-object-detection #task_categories-text-retrieval #task_categories-token-classification #task_categories-text-generation #multilinguality-multilingual #size_categories-100K<n<1M #source_datasets-original #language-multilingual #license-cc-by-nc-sa-4.0 #text-mining #region-us
Mario Maker 2 super worlds
==========================

Part of the Mario Maker 2 Dataset Collection

Dataset Description
-------------------

The Mario Maker 2 super worlds dataset consists of 289 thousand super worlds from Nintendo's online service totaling around 13.5GB of data. The dataset was created using the self-hosted Mario Maker 2 API over the course of 1 month in February 2022.

### How to use it

The Mario Maker 2 super worlds dataset is a very large dataset, so for most use cases it is recommended to make use of the streaming API of 'datasets'. You can load and iterate through the dataset with the following code:

Each row is a unique super world denoted by 'world\_id', created by the player denoted by 'pid'. Thumbnails are binary JPEGs. 'unk1' describes the super world itself, including the world map, but its format is currently unknown.

You can also download the full dataset. Note that this will download ~13.5GB:

Data Structure
--------------

### Data Instances

### Data Fields

Field: pid, Type: string, Description: The player ID of the user who created this super world
Field: world\_id, Type: string, Description: World ID
Field: worlds, Type: int, Description: Number of worlds
Field: levels, Type: int, Description: Number of levels
Field: planet\_type, Type: int, Description: Planet type, enum below
Field: created, Type: int, Description: UTC timestamp of when this super world was created
Field: unk1, Type: bytes, Description: Unknown
Field: unk5, Type: int, Description: Unknown
Field: unk6, Type: int, Description: Unknown
Field: unk7, Type: int, Description: Unknown
Field: thumbnail, Type: bytes, Description: The thumbnail, as a JPEG binary
Field: thumbnail\_url, Type: string, Description: The old URL of this thumbnail
Field: thumbnail\_size, Type: int, Description: The filesize of this thumbnail
Field: thumbnail\_filename, Type: string, Description: The filename of this thumbnail

### Data Splits

The dataset only contains a train split.

Enums
-----

The dataset contains some enum integer fields. These can be used to convert back to their string equivalents:

Dataset Creation
----------------

The dataset was created over a little more than a month in February 2022 using the self-hosted Mario Maker 2 API. As requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no plans to create an updated release of this dataset.

Considerations for Using the Data
---------------------------------

The dataset consists of super worlds from many different Mario Maker 2 players globally and as such harmful depictions could be present in their super world thumbnails.
[ "### How to use it\n\n\nThe Mario Maker 2 super worlds dataset is a very large dataset, so for most use cases it is recommended to make use of the streaming API of 'datasets'. You can load and iterate through the dataset with the following code:\n\n\nEach row is a unique super world denoted by 'world\\_id', created by the player denoted by 'pid'. Thumbnails are binary JPEGs. 'unk1' describes the super world itself, including the world map, but its format is currently unknown.\n\n\nYou can also download the full dataset. Note that this will download ~13.5GB:\n\n\nData Structure\n--------------", "### Data Instances", "### Data Fields\n\n\nField: pid, Type: string, Description: The player ID of the user who created this super world\nField: world\\_id, Type: string, Description: World ID\nField: worlds, Type: int, Description: Number of worlds\nField: levels, Type: int, Description: Number of levels\nField: planet\\_type, Type: int, Description: Planet type, enum below\nField: created, Type: int, Description: UTC timestamp of when this super world was created\nField: unk1, Type: bytes, Description: Unknown\nField: unk5, Type: int, Description: Unknown\nField: unk6, Type: int, Description: Unknown\nField: unk7, Type: int, Description: Unknown\nField: thumbnail, Type: bytes, Description: The thumbnail, as a JPEG binary\nField: thumbnail\\_url, Type: string, Description: The old URL of this thumbnail\nField: thumbnail\\_size, Type: int, Description: The filesize of this thumbnail\nField: thumbnail\\_filename, Type: string, Description: The filename of this thumbnail", "### Data Splits\n\n\nThe dataset only contains a train split.\n\n\nEnums\n-----\n\n\nThe dataset contains some enum integer fields. These can be used to convert back to their string equivalents:\n\n\nDataset Creation\n----------------\n\n\nThe dataset was created over a little more than a month in February 2022 using the self-hosted Mario Maker 2 API. As requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no plans to create an updated release of this dataset.\n\n\nConsiderations for Using the Data\n---------------------------------\n\n\nThe dataset consists of super worlds from many different Mario Maker 2 players globally and as such harmful depictions could be present in their super world thumbnails." ]
[ "TAGS\n#task_categories-other #task_categories-object-detection #task_categories-text-retrieval #task_categories-token-classification #task_categories-text-generation #multilinguality-multilingual #size_categories-100K<n<1M #source_datasets-original #language-multilingual #license-cc-by-nc-sa-4.0 #text-mining #region-us \n", "### How to use it\n\n\nThe Mario Maker 2 super worlds dataset is a very large dataset, so for most use cases it is recommended to make use of the streaming API of 'datasets'. You can load and iterate through the dataset with the following code:\n\n\nEach row is a unique super world denoted by 'world\\_id', created by the player denoted by 'pid'. Thumbnails are binary JPEGs. 'unk1' describes the super world itself, including the world map, but its format is currently unknown.\n\n\nYou can also download the full dataset. Note that this will download ~13.5GB:\n\n\nData Structure\n--------------", "### Data Instances", "### Data Fields\n\n\nField: pid, Type: string, Description: The player ID of the user who created this super world\nField: world\\_id, Type: string, Description: World ID\nField: worlds, Type: int, Description: Number of worlds\nField: levels, Type: int, Description: Number of levels\nField: planet\\_type, Type: int, Description: Planet type, enum below\nField: created, Type: int, Description: UTC timestamp of when this super world was created\nField: unk1, Type: bytes, Description: Unknown\nField: unk5, Type: int, Description: Unknown\nField: unk6, Type: int, Description: Unknown\nField: unk7, Type: int, Description: Unknown\nField: thumbnail, Type: bytes, Description: The thumbnail, as a JPEG binary\nField: thumbnail\\_url, Type: string, Description: The old URL of this thumbnail\nField: thumbnail\\_size, Type: int, Description: The filesize of this thumbnail\nField: thumbnail\\_filename, Type: string, Description: The filename of this thumbnail", "### Data Splits\n\n\nThe dataset only contains a train split.\n\n\nEnums\n-----\n\n\nThe dataset contains some enum integer fields. These can be used to convert back to their string equivalents:\n\n\nDataset Creation\n----------------\n\n\nThe dataset was created over a little more than a month in February 2022 using the self-hosted Mario Maker 2 API. As requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no plans to create an updated release of this dataset.\n\n\nConsiderations for Using the Data\n---------------------------------\n\n\nThe dataset consists of super worlds from many different Mario Maker 2 players globally and as such harmful depictions could be present in their super world thumbnails." ]
acd1e2f4c3e10eeb4315d04d44371cf531e31bcf
# Mario Maker 2 super world levels

Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets)

## Dataset Description

The Mario Maker 2 super world levels dataset consists of 3.3 million super world levels from Nintendo's online service and supplements `TheGreatRambler/mm2_world`. The dataset was created using the self-hosted [Mario Maker 2 API](https://tgrcode.com/posts/mario_maker_2_api) over the course of 1 month in February 2022.

### How to use it

You can load and iterate through the dataset with the following code:

```python
from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_world_levels", split="train")
print(next(iter(ds)))

#OUTPUT:
{
	'pid': '14510618610706594411',
	'data_id': 19170881,
	'ninjis': 23
}
```

Each row is a level, denoted by `data_id`, within a super world owned by the player `pid`. Each level records a number of ninjis (`ninjis`), a rough metric of its popularity. A short worked example of ranking levels by ninjis appears at the end of this card.

## Data Structure

### Data Instances

```python
{
	'pid': '14510618610706594411',
	'data_id': 19170881,
	'ninjis': 23
}
```

### Data Fields

|Field|Type|Description|
|---|---|---|
|pid|string|The player ID of the user who created the super world with this level|
|data_id|int|The data ID of the level|
|ninjis|int|Number of ninjis shown on this level|

### Data Splits

The dataset only contains a train split.

<!-- TODO create detailed statistics -->

## Dataset Creation

The dataset was created over a little more than a month in February 2022 using the self-hosted [Mario Maker 2 API](https://tgrcode.com/posts/mario_maker_2_api). As requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no plans to create an updated release of this dataset.

## Considerations for Using the Data

The dataset contains no harmful language or depictions.
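## Example: ranking levels by ninjis

A minimal sketch of sorting the whole dataset by the `ninjis` popularity metric; the dataset is small enough to download in full:

```python
from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_world_levels", split="train")

# The ten super world levels with the most ninjis in this snapshot
for row in ds.sort("ninjis", reverse=True).select(range(10)):
    print(row["data_id"], row["ninjis"])
```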
TheGreatRambler/mm2_world_levels
[ "task_categories:other", "task_categories:object-detection", "task_categories:text-retrieval", "task_categories:token-classification", "task_categories:text-generation", "multilinguality:multilingual", "size_categories:1M<n<10M", "source_datasets:original", "language:multilingual", "license:cc-by-nc-sa-4.0", "text-mining", "region:us" ]
2022-09-18T19:19:22+00:00
{"language": ["multilingual"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["multilingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["other", "object-detection", "text-retrieval", "token-classification", "text-generation"], "task_ids": [], "pretty_name": "Mario Maker 2 super world levels", "tags": ["text-mining"]}
2022-11-11T08:03:22+00:00
[]
[ "multilingual" ]
TAGS #task_categories-other #task_categories-object-detection #task_categories-text-retrieval #task_categories-token-classification #task_categories-text-generation #multilinguality-multilingual #size_categories-1M<n<10M #source_datasets-original #language-multilingual #license-cc-by-nc-sa-4.0 #text-mining #region-us
Mario Maker 2 super world levels ================================ Part of the Mario Maker 2 Dataset Collection Dataset Description ------------------- The Mario Maker 2 super world levels dataset consists of 3.3 million super world levels from Nintendo's online service and supplements 'TheGreatRambler/mm2\_world'. The dataset was created using the self-hosted Mario Maker 2 api over the course of 1 month in February 2022. ### How to use it You can load and iterate through the dataset with the following code: Each row is a level, denoted by 'data\_id', within a super world owned by the player 'pid'. Each level contains some number of ninjis ('ninjis'), a rough metric of its popularity. Data Structure -------------- ### Data Instances ### Data Fields Field: pid, Type: string, Description: The player ID of the user who created the super world with this level Field: data\_id, Type: int, Description: The data ID of the level Field: ninjis, Type: int, Description: Number of ninjis shown on this level ### Data Splits The dataset only contains a train split. Dataset Creation ---------------- The dataset was created over a little more than a month in February 2022 using the self-hosted Mario Maker 2 api. As requests made to Nintendo's servers require authentication, the process had to be done with utmost care, limiting download speed so as not to overload the API and risk a ban. There are no intentions to create an updated release of this dataset. Considerations for Using the Data --------------------------------- The dataset contains no harmful language or depictions.
[ "### How to use it\n\n\nYou can load and iterate through the dataset with the following code:\n\n\nEach row is a level within a super world owned by player 'pid' that is denoted by 'data\\_id'. Each level contains some number of ninjis 'ninjis', a rough metric for their popularity.\n\n\nData Structure\n--------------", "### Data Instances", "### Data Fields\n\n\nField: pid, Type: string, Description: The player ID of the user who created the super world with this level\nField: data\\_id, Type: int, Description: The data ID of the level\nField: ninjis, Type: int, Description: Number of ninjis shown on this level", "### Data Splits\n\n\nThe dataset only contains a train split.\n\n\nDataset Creation\n----------------\n\n\nThe dataset was created over a little more than a month in Febuary 2022 using the self hosted Mario Maker 2 api. As requests made to Nintendo's servers require authentication the process had to be done with upmost care and limiting download speed as to not overload the API and risk a ban. There are no intentions to create an updated release of this dataset.\n\n\nConsiderations for Using the Data\n---------------------------------\n\n\nThe dataset contains no harmful language or depictions." ]
[ "TAGS\n#task_categories-other #task_categories-object-detection #task_categories-text-retrieval #task_categories-token-classification #task_categories-text-generation #multilinguality-multilingual #size_categories-1M<n<10M #source_datasets-original #language-multilingual #license-cc-by-nc-sa-4.0 #text-mining #region-us \n", "### How to use it\n\n\nYou can load and iterate through the dataset with the following code:\n\n\nEach row is a level within a super world owned by player 'pid' that is denoted by 'data\\_id'. Each level contains some number of ninjis 'ninjis', a rough metric for their popularity.\n\n\nData Structure\n--------------", "### Data Instances", "### Data Fields\n\n\nField: pid, Type: string, Description: The player ID of the user who created the super world with this level\nField: data\\_id, Type: int, Description: The data ID of the level\nField: ninjis, Type: int, Description: Number of ninjis shown on this level", "### Data Splits\n\n\nThe dataset only contains a train split.\n\n\nDataset Creation\n----------------\n\n\nThe dataset was created over a little more than a month in Febuary 2022 using the self hosted Mario Maker 2 api. As requests made to Nintendo's servers require authentication the process had to be done with upmost care and limiting download speed as to not overload the API and risk a ban. There are no intentions to create an updated release of this dataset.\n\n\nConsiderations for Using the Data\n---------------------------------\n\n\nThe dataset contains no harmful language or depictions." ]
14d9b109a50274f2a278c22c01af335da683965a
# Mario Maker 2 ninjis

Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets)

## Dataset Description

The Mario Maker 2 ninjis dataset consists of 3 million ninji replays from Nintendo's online service totaling around 12.5GB of data. The dataset was created using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api) over the course of 1 month in February 2022.

### How to use it

The Mario Maker 2 ninjis dataset is a very large dataset, so for most use cases it is recommended to make use of the streaming API of `datasets`. You can load and iterate through the dataset with the following code:

```python
from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_ninji", streaming=True, split="train")
print(next(iter(ds)))

#OUTPUT:
{
    'data_id': 12171034,
    'pid': '4748613890518923485',
    'time': 83388,
    'replay': [some binary data]
}
```

Each row is a ninji run in the level denoted by the `data_id` done by the player denoted by the `pid`. The length of this ninji run is `time` in milliseconds.

`replay` is a gzip compressed binary file format describing the animation frames and coordinates of the player throughout the run. Parsing the replay is as follows:

```python
from datasets import load_dataset
import zlib
import struct

ds = load_dataset("TheGreatRambler/mm2_ninji", streaming=True, split="train")
row = next(iter(ds))

replay = zlib.decompress(row["replay"])

frames = struct.unpack(">I", replay[0x10:0x14])[0]
character = replay[0x14]
character_mapping = {
    0: "Mario",
    1: "Luigi",
    2: "Toad",
    3: "Toadette"
}

# player_state is between 0 and 14 and varies between gamestyles
# as outlined below. Determining the gamestyle of a particular run
# and rendering the level being played requires TheGreatRambler/mm2_ninji_level
player_state_base = {
    0: "Run/Walk",
    1: "Jump",
    2: "Swim",
    3: "Climbing",
    5: "Sliding",
    7: "Dry bones shell",
    8: "Clown car",
    9: "Cloud",
    10: "Boot",
    11: "Walking cat"
}
player_state_nsmbu = {
    4: "Sliding",
    6: "Turnaround",
    10: "Yoshi",
    12: "Acorn suit",
    13: "Propeller active",
    14: "Propeller neutral"
}
player_state_sm3dw = {
    4: "Sliding",
    6: "Turnaround",
    7: "Clear pipe",
    8: "Cat down attack",
    13: "Propeller active",
    14: "Propeller neutral"
}
player_state_smb1 = {
    4: "Link down slash",
    5: "Crouching"
}
player_state_smw = {
    10: "Yoshi",
    12: "Cape"
}

print("Frames: %d\nCharacter: %s" % (frames, character_mapping[character]))

current_offset = 0x3C
# Ninji updates are reported every 4 frames
for i in range((frames + 2) // 4):
    flags = replay[current_offset] >> 4
    player_state = replay[current_offset] & 0x0F
    current_offset += 1

    x = struct.unpack("<H", replay[current_offset:current_offset + 2])[0]
    current_offset += 2
    y = struct.unpack("<H", replay[current_offset:current_offset + 2])[0]
    current_offset += 2

    if flags & 0b00000110:
        unk1 = replay[current_offset]
        current_offset += 1

    in_subworld = flags & 0b00001000

    print("Frame %d:\n Flags: %s,\n Animation state: %d,\n X: %d,\n Y: %d,\n In subworld: %s" % (i, bin(flags), player_state, x, y, in_subworld))

#OUTPUT:
Frames: 5006
Character: Mario
Frame 0:
 Flags: 0b0,
 Animation state: 0,
 X: 2672,
 Y: 2288,
 In subworld: 0
Frame 1:
 Flags: 0b0,
 Animation state: 0,
 X: 2682,
 Y: 2288,
 In subworld: 0
Frame 2:
 Flags: 0b0,
 Animation state: 0,
 X: 2716,
 Y: 2288,
 In subworld: 0
...
Frame 1249:
 Flags: 0b0,
 Animation state: 1,
 X: 59095,
 Y: 3749,
 In subworld: 0
Frame 1250:
 Flags: 0b0,
 Animation state: 1,
 X: 59246,
 Y: 3797,
 In subworld: 0
Frame 1251:
 Flags: 0b0,
 Animation state: 1,
 X: 59402,
 Y: 3769,
 In subworld: 0
```

You can also download the full dataset. Note that this will download ~12.5GB:

```python
ds = load_dataset("TheGreatRambler/mm2_ninji", split="train")
```

## Data Structure

### Data Instances

```python
{
    'data_id': 12171034,
    'pid': '4748613890518923485',
    'time': 83388,
    'replay': [some binary data]
}
```

### Data Fields

|Field|Type|Description|
|---|---|---|
|data_id|int|The data ID of the level this run occurred in|
|pid|string|Player ID of the player|
|time|int|Length in milliseconds of the run|
|replay|bytes|Replay file of this run|

### Data Splits

The dataset only contains a train split.

<!-- TODO create detailed statistics -->

## Dataset Creation

The dataset was created over a little more than a month in February 2022 using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api). As requests made to Nintendo's servers require authentication, the process had to be done with utmost care, limiting download speed so as not to overload the API and risk a ban. There are no intentions to create an updated release of this dataset.

## Considerations for Using the Data

The dataset contains no harmful language or depictions.
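## Example: measuring run pace from a replay

The parser above yields one `(x, y)` update every 4 frames. As a small sketch of what can be derived from that, the snippet below reuses the same documented offsets to collect the coordinate path of a single replay and computes a crude pace metric; the pace formula itself is illustrative, not part of the replay format:

```python
from datasets import load_dataset
import zlib
import struct

ds = load_dataset("TheGreatRambler/mm2_ninji", streaming=True, split="train")
row = next(iter(ds))

replay = zlib.decompress(row["replay"])
frames = struct.unpack(">I", replay[0x10:0x14])[0]

# Collect the raw (x, y) path using the same layout documented above
positions = []
offset = 0x3C
for _ in range((frames + 2) // 4):
    flags = replay[offset] >> 4
    offset += 1
    x = struct.unpack("<H", replay[offset:offset + 2])[0]
    offset += 2
    y = struct.unpack("<H", replay[offset:offset + 2])[0]
    offset += 2
    if flags & 0b00000110:
        offset += 1  # skip the extra unknown byte noted above
    positions.append((x, y))

# Horizontal distance covered per second, using the documented 'time' field (ms)
distance = sum(abs(b[0] - a[0]) for a, b in zip(positions, positions[1:]))
print("Updates: %d, horizontal distance: %d, pace: %.1f units/s" % (
    len(positions), distance, distance / (row["time"] / 1000)))
```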
TheGreatRambler/mm2_ninji
[ "task_categories:other", "task_categories:object-detection", "task_categories:text-retrieval", "task_categories:token-classification", "task_categories:text-generation", "multilinguality:multilingual", "size_categories:1M<n<10M", "source_datasets:original", "language:multilingual", "license:cc-by-nc-sa-4.0", "text-mining", "region:us" ]
2022-09-18T19:19:35+00:00
{"language": ["multilingual"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["multilingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["other", "object-detection", "text-retrieval", "token-classification", "text-generation"], "task_ids": [], "pretty_name": "Mario Maker 2 ninjis", "tags": ["text-mining"]}
2022-11-11T08:05:22+00:00
[]
[ "multilingual" ]
TAGS #task_categories-other #task_categories-object-detection #task_categories-text-retrieval #task_categories-token-classification #task_categories-text-generation #multilinguality-multilingual #size_categories-1M<n<10M #source_datasets-original #language-multilingual #license-cc-by-nc-sa-4.0 #text-mining #region-us
Mario Maker 2 ninjis ==================== Part of the Mario Maker 2 Dataset Collection Dataset Description ------------------- The Mario Maker 2 ninjis dataset consists of 3 million ninji replays from Nintendo's online service totaling around 12.5GB of data. The dataset was created using the self-hosted Mario Maker 2 api over the course of 1 month in February 2022. ### How to use it The Mario Maker 2 ninjis dataset is a very large dataset, so for most use cases it is recommended to make use of the streaming API of 'datasets'. You can load and iterate through the dataset with the following code: Each row is a ninji run in the level denoted by the 'data\_id' done by the player denoted by the 'pid'. The length of this ninji run is 'time' in milliseconds. 'replay' is a gzip compressed binary file format describing the animation frames and coordinates of the player throughout the run. Parsing the replay is as follows: You can also download the full dataset. Note that this will download ~12.5GB: Data Structure -------------- ### Data Instances ### Data Fields Field: data\_id, Type: int, Description: The data ID of the level this run occurred in Field: pid, Type: string, Description: Player ID of the player Field: time, Type: int, Description: Length in milliseconds of the run Field: replay, Type: bytes, Description: Replay file of this run ### Data Splits The dataset only contains a train split. Dataset Creation ---------------- The dataset was created over a little more than a month in February 2022 using the self-hosted Mario Maker 2 api. As requests made to Nintendo's servers require authentication, the process had to be done with utmost care, limiting download speed so as not to overload the API and risk a ban. There are no intentions to create an updated release of this dataset. Considerations for Using the Data --------------------------------- The dataset contains no harmful language or depictions.
[ "### How to use it\n\n\nThe Mario Maker 2 ninjis dataset is a very large dataset so for most use cases it is recommended to make use of the streaming API of 'datasets'. You can load and iterate through the dataset with the following code:\n\n\nEach row is a ninji run in the level denoted by the 'data\\_id' done by the player denoted by the 'pid', The length of this ninji run is 'time' in milliseconds.\n\n\n'replay' is a gzip compressed binary file format describing the animation frames and coordinates of the player throughout the run. Parsing the replay is as follows:\n\n\nYou can also download the full dataset. Note that this will download ~12.5GB:\n\n\nData Structure\n--------------", "### Data Instances", "### Data Fields\n\n\nField: data\\_id, Type: int, Description: The data ID of the level this run occured in\nField: pid, Type: string, Description: Player ID of the player\nField: time, Type: int, Description: Length in milliseconds of the run\nField: replay, Type: bytes, Description: Replay file of this run", "### Data Splits\n\n\nThe dataset only contains a train split.\n\n\nDataset Creation\n----------------\n\n\nThe dataset was created over a little more than a month in Febuary 2022 using the self hosted Mario Maker 2 api. As requests made to Nintendo's servers require authentication the process had to be done with upmost care and limiting download speed as to not overload the API and risk a ban. There are no intentions to create an updated release of this dataset.\n\n\nConsiderations for Using the Data\n---------------------------------\n\n\nThe dataset contains no harmful language or depictions." ]
[ "TAGS\n#task_categories-other #task_categories-object-detection #task_categories-text-retrieval #task_categories-token-classification #task_categories-text-generation #multilinguality-multilingual #size_categories-1M<n<10M #source_datasets-original #language-multilingual #license-cc-by-nc-sa-4.0 #text-mining #region-us \n", "### How to use it\n\n\nThe Mario Maker 2 ninjis dataset is a very large dataset so for most use cases it is recommended to make use of the streaming API of 'datasets'. You can load and iterate through the dataset with the following code:\n\n\nEach row is a ninji run in the level denoted by the 'data\\_id' done by the player denoted by the 'pid', The length of this ninji run is 'time' in milliseconds.\n\n\n'replay' is a gzip compressed binary file format describing the animation frames and coordinates of the player throughout the run. Parsing the replay is as follows:\n\n\nYou can also download the full dataset. Note that this will download ~12.5GB:\n\n\nData Structure\n--------------", "### Data Instances", "### Data Fields\n\n\nField: data\\_id, Type: int, Description: The data ID of the level this run occured in\nField: pid, Type: string, Description: Player ID of the player\nField: time, Type: int, Description: Length in milliseconds of the run\nField: replay, Type: bytes, Description: Replay file of this run", "### Data Splits\n\n\nThe dataset only contains a train split.\n\n\nDataset Creation\n----------------\n\n\nThe dataset was created over a little more than a month in Febuary 2022 using the self hosted Mario Maker 2 api. As requests made to Nintendo's servers require authentication the process had to be done with upmost care and limiting download speed as to not overload the API and risk a ban. There are no intentions to create an updated release of this dataset.\n\n\nConsiderations for Using the Data\n---------------------------------\n\n\nThe dataset contains no harmful language or depictions." ]
b5f8a698461f84a65ae06ce54705913b6e0928b8
# Mario Maker 2 ninji levels

Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets)

## Dataset Description

The Mario Maker 2 ninji levels dataset consists of 21 ninji levels from Nintendo's online service and accompanies `TheGreatRambler/mm2_ninji`. The dataset was created using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api) over the course of 1 month in February 2022.

### How to use it

You can load and iterate through the dataset with the following code:

```python
from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_ninji_level", split="train")
print(next(iter(ds)))

#OUTPUT:
{
    'data_id': 12171034,
    'name': 'Rolling Snowballs',
    'description': 'Make your way through the snowfields, and keep an eye\nout for Spikes and Snow Pokeys! Stomping on Snow Pokeys\nwill turn them into small snowballs, which you can pick up\nand throw. Play this course as many times as you want,\nand see if you can find the fastest way to the finish!',
    'uploaded': 1575532800,
    'ended': 1576137600,
    'gamestyle': 3,
    'theme': 6,
    'medal_time': 26800,
    'clear_condition': 0,
    'clear_condition_magnitude': 0,
    'unk3_0': 1309513,
    'unk3_1': 62629737,
    'unk3_2': 4355893,
    'unk5': 1,
    'unk6': 0,
    'unk9': 0,
    'level_data': [some binary data]
}
```

Each row is a ninji level denoted by `data_id`. `TheGreatRambler/mm2_ninji` refers to these levels. `level_data` is the same format used in `TheGreatRambler/mm2_level`; the provided Kaitai struct file and `level.py` can be used to decode it:

```python
from datasets import load_dataset
from kaitaistruct import KaitaiStream
from io import BytesIO
from level import Level
import zlib

ds = load_dataset("TheGreatRambler/mm2_ninji_level", split="train")
level_data = next(iter(ds))["level_data"]
level = Level(KaitaiStream(BytesIO(zlib.decompress(level_data))))

# NOTE level.overworld.objects is a fixed size (limitation of Kaitai struct)
# must iterate by object_count or null objects will be included
for i in range(level.overworld.object_count):
    obj = level.overworld.objects[i]
    print("X: %d Y: %d ID: %s" % (obj.x, obj.y, obj.id))

#OUTPUT:
X: 1200 Y: 400 ID: ObjId.block
X: 1360 Y: 400 ID: ObjId.block
X: 1360 Y: 240 ID: ObjId.block
X: 1520 Y: 240 ID: ObjId.block
X: 1680 Y: 240 ID: ObjId.block
X: 1680 Y: 400 ID: ObjId.block
X: 1840 Y: 400 ID: ObjId.block
X: 2000 Y: 400 ID: ObjId.block
X: 2160 Y: 400 ID: ObjId.block
X: 2320 Y: 400 ID: ObjId.block
X: 2480 Y: 560 ID: ObjId.block
X: 2480 Y: 720 ID: ObjId.block
X: 2480 Y: 880 ID: ObjId.block
X: 2160 Y: 880 ID: ObjId.block
```

## Data Structure

### Data Instances

```python
{
    'data_id': 12171034,
    'name': 'Rolling Snowballs',
    'description': 'Make your way through the snowfields, and keep an eye\nout for Spikes and Snow Pokeys! Stomping on Snow Pokeys\nwill turn them into small snowballs, which you can pick up\nand throw. Play this course as many times as you want,\nand see if you can find the fastest way to the finish!',
    'uploaded': 1575532800,
    'ended': 1576137600,
    'gamestyle': 3,
    'theme': 6,
    'medal_time': 26800,
    'clear_condition': 0,
    'clear_condition_magnitude': 0,
    'unk3_0': 1309513,
    'unk3_1': 62629737,
    'unk3_2': 4355893,
    'unk5': 1,
    'unk6': 0,
    'unk9': 0,
    'level_data': [some binary data]
}
```

### Data Fields

|Field|Type|Description|
|---|---|---|
|data_id|int|The data ID of this ninji level|
|name|string|Name|
|description|string|Description|
|uploaded|int|UTC timestamp of when this was uploaded|
|ended|int|UTC timestamp of when this event ended|
|gamestyle|int|Gamestyle, enum below|
|theme|int|Theme, enum below|
|medal_time|int|Time to get a medal in milliseconds|
|clear_condition|int|Clear condition, enum below|
|clear_condition_magnitude|int|If applicable, the magnitude of the clear condition|
|unk3_0|int|Unknown|
|unk3_1|int|Unknown|
|unk3_2|int|Unknown|
|unk5|int|Unknown|
|unk6|int|Unknown|
|unk9|int|Unknown|
|level_data|bytes|The GZIP compressed decrypted level data, a kaitai struct file is provided to read this|
|one_screen_thumbnail|bytes|The one screen course thumbnail, as a JPEG binary|
|one_screen_thumbnail_url|string|The old URL of this thumbnail|
|one_screen_thumbnail_size|int|The filesize of this thumbnail|
|one_screen_thumbnail_filename|string|The filename of this thumbnail|
|entire_thumbnail|bytes|The entire course thumbnail, as a JPEG binary|
|entire_thumbnail_url|string|The old URL of this thumbnail|
|entire_thumbnail_size|int|The filesize of this thumbnail|
|entire_thumbnail_filename|string|The filename of this thumbnail|

### Data Splits

The dataset only contains a train split.

## Enums

The dataset contains some enum integer fields. They match those used by `TheGreatRambler/mm2_level` for the most part, but they are reproduced below:

```python
GameStyles = {
    0: "SMB1",
    1: "SMB3",
    2: "SMW",
    3: "NSMBU",
    4: "SM3DW"
}

CourseThemes = {
    0: "Overworld",
    1: "Underground",
    2: "Castle",
    3: "Airship",
    4: "Underwater",
    5: "Ghost house",
    6: "Snow",
    7: "Desert",
    8: "Sky",
    9: "Forest"
}

ClearConditions = {
    137525990: "Reach the goal without landing after leaving the ground.",
    199585683: "Reach the goal after defeating at least/all (n) Mechakoopa(s).",
    272349836: "Reach the goal after defeating at least/all (n) Cheep Cheep(s).",
    375673178: "Reach the goal without taking damage.",
    426197923: "Reach the goal as Boomerang Mario.",
    436833616: "Reach the goal while wearing a Shoe.",
    713979835: "Reach the goal as Fire Mario.",
    744927294: "Reach the goal as Frog Mario.",
    751004331: "Reach the goal after defeating at least/all (n) Larry(s).",
    900050759: "Reach the goal as Raccoon Mario.",
    947659466: "Reach the goal after defeating at least/all (n) Blooper(s).",
    976173462: "Reach the goal as Propeller Mario.",
    994686866: "Reach the goal while wearing a Propeller Box.",
    998904081: "Reach the goal after defeating at least/all (n) Spike(s).",
    1008094897: "Reach the goal after defeating at least/all (n) Boom Boom(s).",
    1051433633: "Reach the goal while holding a Koopa Shell.",
    1061233896: "Reach the goal after defeating at least/all (n) Porcupuffer(s).",
    1062253843: "Reach the goal after defeating at least/all (n) Charvaargh(s).",
    1079889509: "Reach the goal after defeating at least/all (n) Bullet Bill(s).",
    1080535886: "Reach the goal after defeating at least/all (n) Bully/Bullies.",
    1151250770: "Reach the goal while wearing a Goomba Mask.",
    1182464856: "Reach the goal after defeating at least/all (n) Hop-Chops.",
    1219761531: "Reach the goal while holding a Red POW Block. OR Reach the goal after activating at least/all (n) Red POW Block(s).",
    1221661152: "Reach the goal after defeating at least/all (n) Bob-omb(s).",
    1259427138: "Reach the goal after defeating at least/all (n) Spiny/Spinies.",
    1268255615: "Reach the goal after defeating at least/all (n) Bowser(s)/Meowser(s).",
    1279580818: "Reach the goal after defeating at least/all (n) Ant Trooper(s).",
    1283945123: "Reach the goal on a Lakitu's Cloud.",
    1344044032: "Reach the goal after defeating at least/all (n) Boo(s).",
    1425973877: "Reach the goal after defeating at least/all (n) Roy(s).",
    1429902736: "Reach the goal while holding a Trampoline.",
    1431944825: "Reach the goal after defeating at least/all (n) Morton(s).",
    1446467058: "Reach the goal after defeating at least/all (n) Fish Bone(s).",
    1510495760: "Reach the goal after defeating at least/all (n) Monty Mole(s).",
    1656179347: "Reach the goal after picking up at least/all (n) 1-Up Mushroom(s).",
    1665820273: "Reach the goal after defeating at least/all (n) Hammer Bro(s.).",
    1676924210: "Reach the goal after hitting at least/all (n) P Switch(es). OR Reach the goal while holding a P Switch.",
    1715960804: "Reach the goal after activating at least/all (n) POW Block(s). OR Reach the goal while holding a POW Block.",
    1724036958: "Reach the goal after defeating at least/all (n) Angry Sun(s).",
    1730095541: "Reach the goal after defeating at least/all (n) Pokey(s).",
    1780278293: "Reach the goal as Superball Mario.",
    1839897151: "Reach the goal after defeating at least/all (n) Pom Pom(s).",
    1969299694: "Reach the goal after defeating at least/all (n) Peepa(s).",
    2035052211: "Reach the goal after defeating at least/all (n) Lakitu(s).",
    2038503215: "Reach the goal after defeating at least/all (n) Lemmy(s).",
    2048033177: "Reach the goal after defeating at least/all (n) Lava Bubble(s).",
    2076496776: "Reach the goal while wearing a Bullet Bill Mask.",
    2089161429: "Reach the goal as Big Mario.",
    2111528319: "Reach the goal as Cat Mario.",
    2131209407: "Reach the goal after defeating at least/all (n) Goomba(s)/Galoomba(s).",
    2139645066: "Reach the goal after defeating at least/all (n) Thwomp(s).",
    2259346429: "Reach the goal after defeating at least/all (n) Iggy(s).",
    2549654281: "Reach the goal while wearing a Dry Bones Shell.",
    2694559007: "Reach the goal after defeating at least/all (n) Sledge Bro(s.).",
    2746139466: "Reach the goal after defeating at least/all (n) Rocky Wrench(es).",
    2749601092: "Reach the goal after grabbing at least/all (n) 50-Coin(s).",
    2855236681: "Reach the goal as Flying Squirrel Mario.",
    3036298571: "Reach the goal as Buzzy Mario.",
    3074433106: "Reach the goal as Builder Mario.",
    3146932243: "Reach the goal as Cape Mario.",
    3174413484: "Reach the goal after defeating at least/all (n) Wendy(s).",
    3206222275: "Reach the goal while wearing a Cannon Box.",
    3314955857: "Reach the goal as Link.",
    3342591980: "Reach the goal while you have Super Star invincibility.",
    3346433512: "Reach the goal after defeating at least/all (n) Goombrat(s)/Goombud(s).",
    3348058176: "Reach the goal after grabbing at least/all (n) 10-Coin(s).",
    3353006607: "Reach the goal after defeating at least/all (n) Buzzy Beetle(s).",
    3392229961: "Reach the goal after defeating at least/all (n) Bowser Jr.(s).",
    3437308486: "Reach the goal after defeating at least/all (n) Koopa Troopa(s).",
    3459144213: "Reach the goal after defeating at least/all (n) Chain Chomp(s).",
    3466227835: "Reach the goal after defeating at least/all (n) Muncher(s).",
    3481362698: "Reach the goal after defeating at least/all (n) Wiggler(s).",
    3513732174: "Reach the goal as SMB2 Mario.",
    3649647177: "Reach the goal in a Koopa Clown Car/Junior Clown Car.",
    3725246406: "Reach the goal as Spiny Mario.",
    3730243509: "Reach the goal in a Koopa Troopa Car.",
    3748075486: "Reach the goal after defeating at least/all (n) Piranha Plant(s)/Jumping Piranha Plant(s).",
    3797704544: "Reach the goal after defeating at least/all (n) Dry Bones.",
    3824561269: "Reach the goal after defeating at least/all (n) Stingby/Stingbies.",
    3833342952: "Reach the goal after defeating at least/all (n) Piranha Creeper(s).",
    3842179831: "Reach the goal after defeating at least/all (n) Fire Piranha Plant(s).",
    3874680510: "Reach the goal after breaking at least/all (n) Crates(s).",
    3974581191: "Reach the goal after defeating at least/all (n) Ludwig(s).",
    3977257962: "Reach the goal as Super Mario.",
    4042480826: "Reach the goal after defeating at least/all (n) Skipsqueak(s).",
    4116396131: "Reach the goal after grabbing at least/all (n) Coin(s).",
    4117878280: "Reach the goal after defeating at least/all (n) Magikoopa(s).",
    4122555074: "Reach the goal after grabbing at least/all (n) 30-Coin(s).",
    4153835197: "Reach the goal as Balloon Mario.",
    4172105156: "Reach the goal while wearing a Red POW Box.",
    4209535561: "Reach the Goal while riding Yoshi.",
    4269094462: "Reach the goal after defeating at least/all (n) Spike Top(s).",
    4293354249: "Reach the goal after defeating at least/all (n) Banzai Bill(s)."
}
```

<!-- TODO create detailed statistics -->

## Dataset Creation

The dataset was created over a little more than a month in February 2022 using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api). As requests made to Nintendo's servers require authentication, the process had to be done with utmost care, limiting download speed so as not to overload the API and risk a ban. There are no intentions to create an updated release of this dataset.

## Considerations for Using the Data

As these 21 levels were made and vetted by Nintendo the dataset contains no harmful language or depictions.
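## Example: listing the ninji events

Since the dataset has only 21 rows, it can be loaded in full and tabulated directly. A small sketch using the documented `gamestyle`, `theme`, and `medal_time` fields together with the enums above:

```python
from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_ninji_level", split="train")

GameStyles = {0: "SMB1", 1: "SMB3", 2: "SMW", 3: "NSMBU", 4: "SM3DW"}
CourseThemes = {0: "Overworld", 1: "Underground", 2: "Castle", 3: "Airship",
                4: "Underwater", 5: "Ghost house", 6: "Snow", 7: "Desert",
                8: "Sky", 9: "Forest"}

# Print each ninji event with a human readable style, theme and medal time
for row in ds:
    print("%s (%s, %s), medal time %.1fs" % (
        row["name"], GameStyles[row["gamestyle"]],
        CourseThemes[row["theme"]], row["medal_time"] / 1000))
```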
TheGreatRambler/mm2_ninji_level
[ "task_categories:other", "task_categories:object-detection", "task_categories:text-retrieval", "task_categories:token-classification", "task_categories:text-generation", "multilinguality:multilingual", "size_categories:n<1K", "source_datasets:original", "language:multilingual", "license:cc-by-nc-sa-4.0", "text-mining", "region:us" ]
2022-09-18T19:19:47+00:00
{"language": ["multilingual"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["multilingual"], "size_categories": ["n<1K"], "source_datasets": ["original"], "task_categories": ["other", "object-detection", "text-retrieval", "token-classification", "text-generation"], "task_ids": [], "pretty_name": "Mario Maker 2 ninji levels", "tags": ["text-mining"]}
2022-11-11T08:08:00+00:00
[]
[ "multilingual" ]
TAGS #task_categories-other #task_categories-object-detection #task_categories-text-retrieval #task_categories-token-classification #task_categories-text-generation #multilinguality-multilingual #size_categories-n<1K #source_datasets-original #language-multilingual #license-cc-by-nc-sa-4.0 #text-mining #region-us
Mario Maker 2 ninji levels ========================== Part of the Mario Maker 2 Dataset Collection Dataset Description ------------------- The Mario Maker 2 ninji levels dataset consists of 21 ninji levels from Nintendo's online service and accompanies 'TheGreatRambler/mm2\_ninji'. The dataset was created using the self-hosted Mario Maker 2 api over the course of 1 month in February 2022. ### How to use it You can load and iterate through the dataset with the following code: Each row is a ninji level denoted by 'data\_id'. 'TheGreatRambler/mm2\_ninji' refers to these levels. 'level\_data' is the same format used in 'TheGreatRambler/mm2\_level'; the provided Kaitai struct file and 'URL' can be used to decode it: Data Structure -------------- ### Data Instances ### Data Fields Field: data\_id, Type: int, Description: The data ID of this ninji level Field: name, Type: string, Description: Name Field: description, Type: string, Description: Description Field: uploaded, Type: int, Description: UTC timestamp of when this was uploaded Field: ended, Type: int, Description: UTC timestamp of when this event ended Field: gamestyle, Type: int, Description: Gamestyle, enum below Field: theme, Type: int, Description: Theme, enum below Field: medal\_time, Type: int, Description: Time to get a medal in milliseconds Field: clear\_condition, Type: int, Description: Clear condition, enum below Field: clear\_condition\_magnitude, Type: int, Description: If applicable, the magnitude of the clear condition Field: unk3\_0, Type: int, Description: Unknown Field: unk3\_1, Type: int, Description: Unknown Field: unk3\_2, Type: int, Description: Unknown Field: unk5, Type: int, Description: Unknown Field: unk6, Type: int, Description: Unknown Field: unk9, Type: int, Description: Unknown Field: level\_data, Type: bytes, Description: The GZIP compressed decrypted level data, a kaitai struct file is provided to read this Field: one\_screen\_thumbnail, Type: bytes, Description: The one screen course thumbnail, as a JPEG binary Field: one\_screen\_thumbnail\_url, Type: string, Description: The old URL of this thumbnail Field: one\_screen\_thumbnail\_size, Type: int, Description: The filesize of this thumbnail Field: one\_screen\_thumbnail\_filename, Type: string, Description: The filename of this thumbnail Field: entire\_thumbnail, Type: bytes, Description: The entire course thumbnail, as a JPEG binary Field: entire\_thumbnail\_url, Type: string, Description: The old URL of this thumbnail Field: entire\_thumbnail\_size, Type: int, Description: The filesize of this thumbnail Field: entire\_thumbnail\_filename, Type: string, Description: The filename of this thumbnail ### Data Splits The dataset only contains a train split. Enums ----- The dataset contains some enum integer fields. They match those used by 'TheGreatRambler/mm2\_level' for the most part, but they are reproduced below: Dataset Creation ---------------- The dataset was created over a little more than a month in February 2022 using the self-hosted Mario Maker 2 api. As requests made to Nintendo's servers require authentication, the process had to be done with utmost care, limiting download speed so as not to overload the API and risk a ban. There are no intentions to create an updated release of this dataset. Considerations for Using the Data --------------------------------- As these 21 levels were made and vetted by Nintendo the dataset contains no harmful language or depictions.
[ "### How to use it\n\n\nYou can load and iterate through the dataset with the following code:\n\n\nEach row is a ninji level denoted by 'data\\_id'. 'TheGreatRambler/mm2\\_ninji' refers to these levels. 'level\\_data' is the same format used in 'TheGreatRambler/mm2\\_level' and the provided Kaitai struct file and 'URL' can be used to decode it:\n\n\nData Structure\n--------------", "### Data Instances", "### Data Fields\n\n\nField: data\\_id, Type: int, Description: The data ID of this ninji level\nField: name, Type: string, Description: Name\nField: description, Type: string, Description: Description\nField: uploaded, Type: int, Description: UTC timestamp of when this was uploaded\nField: ended, Type: int, Description: UTC timestamp of when this event ended\nField: gamestyle, Type: int, Description: Gamestyle, enum below\nField: theme, Type: int, Description: Theme, enum below\nField: medal\\_time, Type: int, Description: Time to get a medal in milliseconds\nField: clear\\_condition, Type: int, Description: Clear condition, enum below\nField: clear\\_condition\\_magnitude, Type: int, Description: If applicable, the magnitude of the clear condition\nField: unk3\\_0, Type: int, Description: Unknown\nField: unk3\\_1, Type: int, Description: Unknown\nField: unk3\\_2, Type: int, Description: Unknown\nField: unk5, Type: int, Description: Unknown\nField: unk6, Type: int, Description: Unknown\nField: unk9, Type: int, Description: Unknown\nField: level\\_data, Type: bytes, Description: The GZIP compressed decrypted level data, a kaitai struct file is provided to read this\nField: one\\_screen\\_thumbnail, Type: bytes, Description: The one screen course thumbnail, as a JPEG binary\nField: one\\_screen\\_thumbnail\\_url, Type: string, Description: The old URL of this thumbnail\nField: one\\_screen\\_thumbnail\\_size, Type: int, Description: The filesize of this thumbnail\nField: one\\_screen\\_thumbnail\\_filename, Type: string, Description: The filename of this thumbnail\nField: entire\\_thumbnail, Type: bytes, Description: The entire course thumbnail, as a JPEG binary\nField: entire\\_thumbnail\\_url, Type: string, Description: The old URL of this thumbnail\nField: entire\\_thumbnail\\_size, Type: int, Description: The filesize of this thumbnail\nField: entire\\_thumbnail\\_filename, Type: string, Description: The filename of this thumbnail", "### Data Splits\n\n\nThe dataset only contains a train split.\n\n\nEnums\n-----\n\n\nThe dataset contains some enum integer fields. They match those used by 'TheGreatRambler/mm2\\_level' for the most part, but they are reproduced below:\n\n\nDataset Creation\n----------------\n\n\nThe dataset was created over a little more than a month in Febuary 2022 using the self hosted Mario Maker 2 api. As requests made to Nintendo's servers require authentication the process had to be done with upmost care and limiting download speed as to not overload the API and risk a ban. There are no intentions to create an updated release of this dataset.\n\n\nConsiderations for Using the Data\n---------------------------------\n\n\nAs these 21 levels were made and vetted by Nintendo the dataset contains no harmful language or depictions." ]
[ "TAGS\n#task_categories-other #task_categories-object-detection #task_categories-text-retrieval #task_categories-token-classification #task_categories-text-generation #multilinguality-multilingual #size_categories-n<1K #source_datasets-original #language-multilingual #license-cc-by-nc-sa-4.0 #text-mining #region-us \n", "### How to use it\n\n\nYou can load and iterate through the dataset with the following code:\n\n\nEach row is a ninji level denoted by 'data\\_id'. 'TheGreatRambler/mm2\\_ninji' refers to these levels. 'level\\_data' is the same format used in 'TheGreatRambler/mm2\\_level' and the provided Kaitai struct file and 'URL' can be used to decode it:\n\n\nData Structure\n--------------", "### Data Instances", "### Data Fields\n\n\nField: data\\_id, Type: int, Description: The data ID of this ninji level\nField: name, Type: string, Description: Name\nField: description, Type: string, Description: Description\nField: uploaded, Type: int, Description: UTC timestamp of when this was uploaded\nField: ended, Type: int, Description: UTC timestamp of when this event ended\nField: gamestyle, Type: int, Description: Gamestyle, enum below\nField: theme, Type: int, Description: Theme, enum below\nField: medal\\_time, Type: int, Description: Time to get a medal in milliseconds\nField: clear\\_condition, Type: int, Description: Clear condition, enum below\nField: clear\\_condition\\_magnitude, Type: int, Description: If applicable, the magnitude of the clear condition\nField: unk3\\_0, Type: int, Description: Unknown\nField: unk3\\_1, Type: int, Description: Unknown\nField: unk3\\_2, Type: int, Description: Unknown\nField: unk5, Type: int, Description: Unknown\nField: unk6, Type: int, Description: Unknown\nField: unk9, Type: int, Description: Unknown\nField: level\\_data, Type: bytes, Description: The GZIP compressed decrypted level data, a kaitai struct file is provided to read this\nField: one\\_screen\\_thumbnail, Type: bytes, Description: The one screen course thumbnail, as a JPEG binary\nField: one\\_screen\\_thumbnail\\_url, Type: string, Description: The old URL of this thumbnail\nField: one\\_screen\\_thumbnail\\_size, Type: int, Description: The filesize of this thumbnail\nField: one\\_screen\\_thumbnail\\_filename, Type: string, Description: The filename of this thumbnail\nField: entire\\_thumbnail, Type: bytes, Description: The entire course thumbnail, as a JPEG binary\nField: entire\\_thumbnail\\_url, Type: string, Description: The old URL of this thumbnail\nField: entire\\_thumbnail\\_size, Type: int, Description: The filesize of this thumbnail\nField: entire\\_thumbnail\\_filename, Type: string, Description: The filename of this thumbnail", "### Data Splits\n\n\nThe dataset only contains a train split.\n\n\nEnums\n-----\n\n\nThe dataset contains some enum integer fields. They match those used by 'TheGreatRambler/mm2\\_level' for the most part, but they are reproduced below:\n\n\nDataset Creation\n----------------\n\n\nThe dataset was created over a little more than a month in Febuary 2022 using the self hosted Mario Maker 2 api. As requests made to Nintendo's servers require authentication the process had to be done with upmost care and limiting download speed as to not overload the API and risk a ban. There are no intentions to create an updated release of this dataset.\n\n\nConsiderations for Using the Data\n---------------------------------\n\n\nAs these 21 levels were made and vetted by Nintendo the dataset contains no harmful language or depictions." ]
448fdb1bc7b2d09e46881c4541a14d796a3d41e8
# Dataset Card for "yerevann/coco-karpathy" The Karpathy split of COCO for image captioning.
yerevann/coco-karpathy
[ "task_categories:image-to-text", "task_ids:image-captioning", "language:en", "coco", "image-captioning", "region:us" ]
2022-09-18T21:50:19+00:00
{"language": ["en"], "task_categories": ["image-to-text"], "task_ids": ["image-captioning"], "pretty_name": "COCO Karpathy split", "tags": ["coco", "image-captioning"]}
2022-10-31T11:24:01+00:00
[]
[ "en" ]
TAGS #task_categories-image-to-text #task_ids-image-captioning #language-English #coco #image-captioning #region-us
# Dataset Card for "yerevann/coco-karpathy" The Karpathy split of COCO for image captioning.
[ "# Dataset Card for \"yerevann/coco-karpathy\"\n\nThe Karpathy split of COCO for image captioning." ]
[ "TAGS\n#task_categories-image-to-text #task_ids-image-captioning #language-English #coco #image-captioning #region-us \n", "# Dataset Card for \"yerevann/coco-karpathy\"\n\nThe Karpathy split of COCO for image captioning." ]
8a86b23b745d215c4dbbb058f0c41185c7fab734
# Dataset Card for SOAP
jamil/soap_notes
[ "license:apache-2.0", "region:us" ]
2022-09-18T23:54:25+00:00
{"license": "apache-2.0"}
2022-09-19T00:33:08+00:00
[]
[]
TAGS #license-apache-2.0 #region-us
# Dataset Card for SOAP
[ "# Dataset Card for SOAP" ]
[ "TAGS\n#license-apache-2.0 #region-us \n", "# Dataset Card for SOAP" ]
8447c236d6c6bf4986eb3e4330a41d258b727362
# Dataset Description

This is a dataset of emotional contexts that was retrieved from the original EmpatheticDialogues (ED) dataset. Respondents were asked to describe an event that was associated with a particular emotion label (i.e., p(event|emotion)).

There are 32 emotion labels in total. There are 19209, 2756, and 2542 instances of emotional descriptions in the train, valid, and test sets, respectively.
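A minimal loading sketch; the split sizes should match the counts above, and the column names are best inspected on the loaded dataset since the card does not list them:

```python
from datasets import load_dataset

ds = load_dataset("bdotloh/empathetic-dialogues-contexts")

# Expect roughly 19209 / 2756 / 2542 rows across the three splits
print({split: ds[split].num_rows for split in ds})

# Check the schema before relying on any particular column name
print(ds["train"].column_names)
```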
bdotloh/empathetic-dialogues-contexts
[ "task_categories:text-classification", "annotations_creators:crowdsourced", "multilinguality:monolingual", "language:en", "region:us" ]
2022-09-19T04:58:21+00:00
{"annotations_creators": ["crowdsourced"], "language": ["en"], "multilinguality": ["monolingual"], "task_categories": ["text-classification"]}
2022-09-21T05:12:44+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #annotations_creators-crowdsourced #multilinguality-monolingual #language-English #region-us
# Dataset Description This is a dataset of emotional contexts that was retrieved from the original EmpatheticDialogues (ED) dataset. Respondents were asked to describe an event that was associated with a particular emotion label (i.e., p(event|emotion)). There are 32 emotion labels in total. There are 19209, 2756, and 2542 instances of emotional descriptions in the train, valid, and test sets, respectively.
[ "# Dataset Description\nThis is a dataset of emotional contexts that was retrieved from the original EmpatheticDialogues (ED) dataset. Respondents were asked to describe an event that was associated with a particular emotion label (i.e. p(event|emotion). \n\nThere are 32 emotion labels in total.\nThere are 19209, 2756, and 2542 instances of emotional descriptions in the train, valid, and test set, respectively." ]
[ "TAGS\n#task_categories-text-classification #annotations_creators-crowdsourced #multilinguality-monolingual #language-English #region-us \n", "# Dataset Description\nThis is a dataset of emotional contexts that was retrieved from the original EmpatheticDialogues (ED) dataset. Respondents were asked to describe an event that was associated with a particular emotion label (i.e. p(event|emotion). \n\nThere are 32 emotion labels in total.\nThere are 19209, 2756, and 2542 instances of emotional descriptions in the train, valid, and test set, respectively." ]
7d5077a33a8336d2f53095765e22cf9987443996
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Summarization
* Model: morenolq/bart-base-xsum
* Dataset: xsum
* Config: default
* Split: test

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@morenolq](https://huggingface.co/morenolq) for evaluating this model.
autoevaluate/autoeval-eval-xsum-default-ca7304-1504954794
[ "autotrain", "evaluation", "region:us" ]
2022-09-19T06:52:37+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "morenolq/bart-base-xsum", "metrics": ["bertscore"], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-19T07:01:07+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: morenolq/bart-base-xsum * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @morenolq for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: morenolq/bart-base-xsum\n* Dataset: xsum\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @morenolq for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: morenolq/bart-base-xsum\n* Dataset: xsum\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @morenolq for evaluating this model." ]
95b112abeaf5782f4326d869e1081816556a5d16
A sampled version of the [CCMatrix](https://huggingface.co/datasets/yhavinga/ccmatrix) dataset for the German-English pair, containing 1M train entries.
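A minimal loading sketch; the row count should match the 1M figure above, and the pair fields are best inspected on the first row since the card does not document the schema:

```python
from datasets import load_dataset

ds = load_dataset("j0hngou/ccmatrix_de-en", split="train")
print(ds.num_rows)  # expected: ~1,000,000 sampled pairs
print(ds[0])        # inspect the structure of a single translation pair
```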
j0hngou/ccmatrix_de-en
[ "language:en", "language:de", "region:us" ]
2022-09-19T12:08:48+00:00
{"language": ["en", "de"]}
2022-09-26T15:35:03+00:00
[]
[ "en", "de" ]
TAGS #language-English #language-German #region-us
A sampled version of the CCMatrix dataset for the German-English pair, containing 1M train entries.
[]
[ "TAGS\n#language-English #language-German #region-us \n" ]
0f9bec2b0fbbfc8643ae5442903d63dd701ff51b
# Dataset Card for Literary fictions of Gallica

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://doi.org/10.5281/zenodo.4660197
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

The collection "Fiction littéraire de Gallica" includes 19,240 public domain documents from the digital platform of the French National Library that were originally classified as novels or, more broadly, as literary fiction in prose. It consists of 372 tables of data in tsv format for each year of publication from 1600 to 1996 (all the missing years are in the 17th and 20th centuries). Each table is structured at the page-level of each novel (5,723,986 pages in all). It contains the complete text with the addition of some metadata. It can be opened in Excel or, preferably, with the new data analysis environments in R or Python (tidyverse, pandas…).

This corpus can be used for large-scale quantitative analyses in computational humanities. The OCR text is presented in a raw format without any correction or enrichment in order to be directly processed for text mining purposes.

The extraction is based on a historical categorization of the novels: the Y2 or Ybis classification. This classification, invented in 1730, is the only one that has been continuously applied to the BNF collections now available in the public domain (mainly before 1950). Consequently, the dataset is based on a definition of "novel" that is generally contemporaneous with the publication.

A French data paper (in PDF and HTML) presents the construction process of the Y2 category and describes the structuring of the corpus. It also gives several examples of possible uses for computational humanities projects.

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

[More Information Needed]

## Dataset Structure

### Data Instances

```
{
  'main_id': 'bpt6k97892392_p174',
  'catalogue_id': 'cb31636383z',
  'titre': "L'île du docteur Moreau",
  'nom_auteur': 'Wells',
  'prenom_auteur': 'Herbert George',
  'date': 1946,
  'document_ocr': 99,
  'date_enligne': '07/08/2017',
  'gallica': 'http://gallica.bnf.fr/ark:/12148/bpt6k97892392/f174',
  'page': 174,
  'texte': "_p_ dans leur expression et leurs gestes souples, d au- c tres semblables à des estropiés, ou si étrangement i défigurées qu'on eût dit les êtres qui hantent nos M rêves les plus sinistres. Au delà, se trouvaient d 'un côté les lignes onduleuses -des roseaux, de l'autre, s un dense enchevêtrement de palmiers nous séparant du ravin des 'huttes et, vers le Nord, l horizon brumeux du Pacifique. - _p_ — Soixante-deux, soixante-trois, compta Mo- H reau, il en manque quatre. J _p_ — Je ne vois pas l'Homme-Léopard, dis-je. | Tout à coup Moreau souffla une seconde fois dans son cor, et à ce son toutes les bêtes humai- ' nes se roulèrent et se vautrèrent dans la poussière. Alors se glissant furtivement hors des roseaux, rampant presque et essayant de rejoindre le cercle des autres derrière le dos de Moreau, parut l'Homme-Léopard. Le dernier qui vint fut le petit Homme-Singe. Les autres, échauffés et fatigués par leurs gesticulations, lui lancèrent de mauvais regards. _p_ — Assez! cria Moreau, de sa voix sonore et ferme. Toutes les bêtes s'assirent sur leurs talons et cessèrent leur adoration. - _p_ — Où est celui |qui enseigne la Loi? demanda Moreau."
}
```

### Data Fields

- `main_id`: Unique identifier of the page of the roman.
- `catalogue_id`: Identifier of the edition in the BNF catalogue.
- `titre`: Title of the edition as it appears in the catalog.
- `nom_auteur`: Author's name.
- `prenom_auteur`: Author's first name.
- `date`: Year of edition.
- `document_ocr`: Estimated quality of OCR for the whole document as a percentage of words probably well recognized (from 1-100).
- `date_enligne`: Date of the online publishing of the digitization on Gallica.
- `gallica`: URL of the document on Gallica.
- `page`: Document page number (this is the pagination of the digital file, not the one of the original document).
- `texte`: Page text, as rendered by OCR.

### Data Splits

The dataset contains a single "train" split.

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[Creative Commons Zero v1.0 Universal](https://creativecommons.org/publicdomain/zero/1.0/legalcode).

### Citation Information

```
@dataset{langlais_pierre_carl_2021_4751204,
  author    = {Langlais, Pierre-Carl},
  title     = {{Fictions littéraires de Gallica / Literary fictions of Gallica}},
  month     = apr,
  year      = 2021,
  publisher = {Zenodo},
  version   = 1,
  doi       = {10.5281/zenodo.4751204},
  url       = {https://doi.org/10.5281/zenodo.4751204}
}
```

### Contributions

Thanks to [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
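## Example usage

As a quick sketch of how the documented fields support corpus-level filtering, the snippet below streams the dataset (5.7 million pages is large) and counts well-recognized pages per decade; only the documented `date` and `document_ocr` fields are used, and the 50,000-row cutoff is there purely to keep the sample small:

```python
from collections import Counter

from datasets import load_dataset

# Stream rather than download: the corpus holds 5,723,986 pages
ds = load_dataset("biglam/gallica_literary_fictions", split="train", streaming=True)

pages_per_decade = Counter()
for i, row in enumerate(ds):
    # document_ocr estimates the share of correctly recognized words (1-100)
    if row["document_ocr"] >= 90:
        pages_per_decade[10 * (row["date"] // 10)] += 1
    if i >= 50_000:  # sample the stream instead of scanning everything
        break

for decade, pages in sorted(pages_per_decade.items()):
    print("%ds: %d well-recognized pages" % (decade, pages))
```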
biglam/gallica_literary_fictions
[ "task_categories:text-generation", "task_ids:language-modeling", "multilinguality:monolingual", "source_datasets:original", "language:fr", "license:cc0-1.0", "region:us" ]
2022-09-19T12:17:09+00:00
{"language": "fr", "license": "cc0-1.0", "multilinguality": ["monolingual"], "source_datasets": ["original"], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "Literary fictions of Gallica"}
2022-09-19T12:58:06+00:00
[]
[ "fr" ]
TAGS #task_categories-text-generation #task_ids-language-modeling #multilinguality-monolingual #source_datasets-original #language-French #license-cc0-1.0 #region-us
# Dataset Card for Literary fictions of Gallica ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary The collection "Fiction littéraire de Gallica" includes 19,240 public domain documents from the digital platform of the French National Library that were originally classified as novels or, more broadly, as literary fiction in prose. It consists of 372 tables of data in tsv format for each year of publication from 1600 to 1996 (all the missing years are in the 17th and 20th centuries). Each table is structured at the page-level of each novel (5,723,986 pages in all). It contains the complete text with the addition of some metadata. It can be opened in Excel or, preferably, with the new data analysis environments in R or Python (tidyverse, pandas…). This corpus can be used for large-scale quantitative analyses in computational humanities. The OCR text is presented in a raw format without any correction or enrichment in order to be directly processed for text mining purposes. The extraction is based on a historical categorization of the novels: the Y2 or Ybis classification. This classification, invented in 1730, is the only one that has been continuously applied to the BNF collections now available in the public domain (mainly before 1950). Consequently, the dataset is based on a definition of "novel" that is generally contemporaneous with the publication. A French data paper (in PDF and HTML) presents the construction process of the Y2 category and describes the structuring of the corpus. It also gives several examples of possible uses for computational humanities projects. ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields - 'main_id': Unique identifier of the page of the roman. - 'catalogue_id': Identifier of the edition in the BNF catalogue. - 'titre': Title of the edition as it appears in the catalog. - 'nom_auteur': Author's name. - 'prenom_auteur': Author's first name. - 'date': Year of edition. - 'document_ocr': Estimated quality of OCR for the whole document as a percentage of words probably well recognized (from 1-100). - 'date_enligne': Date of the online publishing of the digitization on Gallica. - 'gallica': URL of the document on Gallica. - 'page': Document page number (this is the pagination of the digital file, not the one of the original document). - 'texte': Page text, as rendered by OCR. ### Data Splits The dataset contains a single "train" split. ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information Creative Commons Zero v1.0 Universal.
### Contributions Thanks to @albertvillanova for adding this dataset.
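Since each yearly table is a page-level tsv file with the columns documented above, a minimal pandas sketch can illustrate how full novel texts might be reassembled (the filename is hypothetical; substitute the table you downloaded):

```python
import pandas as pd

# Hypothetical filename: the corpus ships one tsv table per publication year.
df = pd.read_csv("fictions_1850.tsv", sep="\t")

# Each row is one page of a novel; sort by page and join the OCR text per edition.
novels = (
    df.sort_values("page")
      .groupby(["catalogue_id", "titre"])["texte"]
      .apply(lambda pages: " ".join(pages.astype(str)))
)
print(novels.head())
```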
[ "# Dataset Card for Literary fictions of Gallica", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nThe collection \"Fiction littéraire de Gallica\" includes 19,240 public domain documents from the digital platform of the French National Library that were originally classified as novels or, more broadly, as literary fiction in prose. It consists of 372 tables of data in tsv format for each year of publication from 1600 to 1996 (all the missing years are in the 17th and 20th centuries). Each table is structured at the page-level of each novel (5,723,986 pages in all). It contains the complete text with the addition of some metadata. It can be opened in Excel or, preferably, with the new data analysis environments in R or Python (tidyverse, pandas…)\n\nThis corpus can be used for large-scale quantitative analyses in computational humanities. The OCR text is presented in a raw format without any correction or enrichment in order to be directly processed for text mining purposes.\n\nThe extraction is based on a historical categorization of the novels: the Y2 or Ybis classification. This classification, invented in 1730, is the only one that has been continuously applied to the BNF collections now available in the public domain (mainly before 1950). Consequently, the dataset is based on a definition of \"novel\" that is generally contemporary of the publication.\n\nA French data paper (in PDF and HTML) presents the construction process of the Y2 category and describes the structuring of the corpus. 
It also gives several examples of possible uses for computational humanities projects.", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields\n\n- 'main_id': Unique identifier of the page of the roman.\n- 'catalogue_id': Identifier of the edition in the BNF catalogue.\n- 'titre': Title of the edition as it appears in the catalog.\n- 'nom_auteur': Author's name.\n- 'prenom_auteur': Author's first name.\n- 'date': Year of edition.\n- 'document_ocr': Estimated quality of ocerization for the whole document as a percentage of words probably well recognized (from 1-100).\n- 'date_enligne': Date of the online publishing of the digitization on Gallica.\n- 'gallica': URL of the document on Gallica.\n- 'page': Document page number (this is the pagination of the digital file, not the one of the original document).\n- 'texte': Page text, as rendered by OCR.", "### Data Splits\n\nThe dataset contains a single \"train\" split.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nCreative Commons Zero v1.0 Universal.", "### Contributions\n\nThanks to @albertvillanova for adding this dataset." ]
[ "TAGS\n#task_categories-text-generation #task_ids-language-modeling #multilinguality-monolingual #source_datasets-original #language-French #license-cc0-1.0 #region-us \n", "# Dataset Card for Literary fictions of Gallica", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nThe collection \"Fiction littéraire de Gallica\" includes 19,240 public domain documents from the digital platform of the French National Library that were originally classified as novels or, more broadly, as literary fiction in prose. It consists of 372 tables of data in tsv format for each year of publication from 1600 to 1996 (all the missing years are in the 17th and 20th centuries). Each table is structured at the page-level of each novel (5,723,986 pages in all). It contains the complete text with the addition of some metadata. It can be opened in Excel or, preferably, with the new data analysis environments in R or Python (tidyverse, pandas…)\n\nThis corpus can be used for large-scale quantitative analyses in computational humanities. The OCR text is presented in a raw format without any correction or enrichment in order to be directly processed for text mining purposes.\n\nThe extraction is based on a historical categorization of the novels: the Y2 or Ybis classification. This classification, invented in 1730, is the only one that has been continuously applied to the BNF collections now available in the public domain (mainly before 1950). Consequently, the dataset is based on a definition of \"novel\" that is generally contemporary of the publication.\n\nA French data paper (in PDF and HTML) presents the construction process of the Y2 category and describes the structuring of the corpus. 
It also gives several examples of possible uses for computational humanities projects.", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields\n\n- 'main_id': Unique identifier of the page of the roman.\n- 'catalogue_id': Identifier of the edition in the BNF catalogue.\n- 'titre': Title of the edition as it appears in the catalog.\n- 'nom_auteur': Author's name.\n- 'prenom_auteur': Author's first name.\n- 'date': Year of edition.\n- 'document_ocr': Estimated quality of ocerization for the whole document as a percentage of words probably well recognized (from 1-100).\n- 'date_enligne': Date of the online publishing of the digitization on Gallica.\n- 'gallica': URL of the document on Gallica.\n- 'page': Document page number (this is the pagination of the digital file, not the one of the original document).\n- 'texte': Page text, as rendered by OCR.", "### Data Splits\n\nThe dataset contains a single \"train\" split.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nCreative Commons Zero v1.0 Universal.", "### Contributions\n\nThanks to @albertvillanova for adding this dataset." ]
559e6e78c86a66b7353e87f78b2eaf5b487e0744
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: morenolq/bart-base-xsum * Dataset: xsum * Config: default * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@morenolq](https://huggingface.co/morenolq) for evaluating this model.
autoevaluate/autoeval-eval-xsum-default-d5c7a7-1507154810
[ "autotrain", "evaluation", "region:us" ]
2022-09-19T12:37:20+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "morenolq/bart-base-xsum", "metrics": ["bertscore"], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "validation", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-19T12:45:50+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: morenolq/bart-base-xsum * Dataset: xsum * Config: default * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @morenolq for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: morenolq/bart-base-xsum\n* Dataset: xsum\n* Config: default\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @morenolq for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: morenolq/bart-base-xsum\n* Dataset: xsum\n* Config: default\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @morenolq for evaluating this model." ]
8e4813d4198fd5da65377f6757b4a420c8a6eb5b
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: navteca/roberta-large-squad2 * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@tvdermeer](https://huggingface.co/tvdermeer) for evaluating this model.
autoevaluate/autoeval-eval-squad_v2-squad_v2-552ce2-1507654811
[ "autotrain", "evaluation", "region:us" ]
2022-09-19T12:37:23+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "navteca/roberta-large-squad2", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-09-19T12:41:56+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Question Answering * Model: navteca/roberta-large-squad2 * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @tvdermeer for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: navteca/roberta-large-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @tvdermeer for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: navteca/roberta-large-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @tvdermeer for evaluating this model." ]
76fb3cdf9ae1951b111ed14ef24d58d24c39d46c
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Text Classification * Model: morenolq/distilbert-base-cased-emotion * Dataset: emotion * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@morenolq](https://huggingface.co/morenolq) for evaluating this model.
autoevaluate/autoeval-eval-emotion-default-2be497-1508254837
[ "autotrain", "evaluation", "region:us" ]
2022-09-19T13:17:15+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "morenolq/distilbert-base-cased-emotion", "metrics": [], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}}
2022-09-19T13:17:42+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Text Classification * Model: morenolq/distilbert-base-cased-emotion * Dataset: emotion * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @morenolq for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: morenolq/distilbert-base-cased-emotion\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @morenolq for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: morenolq/distilbert-base-cased-emotion\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @morenolq for evaluating this model." ]
2c0ff370938b073a6e0e894789f0697c701e4f3d
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Text Classification * Model: morenolq/distilbert-base-cased-emotion * Dataset: emotion * Config: default * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@morenolq](https://huggingface.co/morenolq) for evaluating this model.
autoevaluate/autoeval-eval-emotion-default-f266e6-1508354838
[ "autotrain", "evaluation", "region:us" ]
2022-09-19T13:17:18+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "morenolq/distilbert-base-cased-emotion", "metrics": [], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "validation", "col_mapping": {"text": "text", "target": "label"}}}
2022-09-19T13:17:45+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Text Classification * Model: morenolq/distilbert-base-cased-emotion * Dataset: emotion * Config: default * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @morenolq for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: morenolq/distilbert-base-cased-emotion\n* Dataset: emotion\n* Config: default\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @morenolq for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: morenolq/distilbert-base-cased-emotion\n* Dataset: emotion\n* Config: default\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @morenolq for evaluating this model." ]
675263df9cdf386ecb16016c1434cf90108914d5
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Natural Language Inference * Model: JeremiahZ/bert-base-uncased-rte * Dataset: glue * Config: rte * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@JeremiahZ](https://huggingface.co/JeremiahZ) for evaluating this model.
autoevaluate/autoeval-eval-glue-rte-157f21-1508454839
[ "autotrain", "evaluation", "region:us" ]
2022-09-19T13:17:23+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "natural_language_inference", "model": "JeremiahZ/bert-base-uncased-rte", "metrics": [], "dataset_name": "glue", "dataset_config": "rte", "dataset_split": "validation", "col_mapping": {"text1": "sentence1", "text2": "sentence2", "target": "label"}}}
2022-09-19T13:17:54+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Natural Language Inference * Model: JeremiahZ/bert-base-uncased-rte * Dataset: glue * Config: rte * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @JeremiahZ for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: JeremiahZ/bert-base-uncased-rte\n* Dataset: glue\n* Config: rte\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @JeremiahZ for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: JeremiahZ/bert-base-uncased-rte\n* Dataset: glue\n* Config: rte\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @JeremiahZ for evaluating this model." ]
a4302a5208a75bd5eafff39c433c0073cf7b649e
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Natural Language Inference * Model: JeremiahZ/bert-base-uncased-qqp * Dataset: glue * Config: qqp * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@JeremiahZ](https://huggingface.co/JeremiahZ) for evaluating this model.
autoevaluate/autoeval-eval-glue-qqp-b620ce-1508754840
[ "autotrain", "evaluation", "region:us" ]
2022-09-19T13:17:29+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "natural_language_inference", "model": "JeremiahZ/bert-base-uncased-qqp", "metrics": [], "dataset_name": "glue", "dataset_config": "qqp", "dataset_split": "validation", "col_mapping": {"text1": "question1", "text2": "question2", "target": "label"}}}
2022-09-19T13:20:34+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Natural Language Inference * Model: JeremiahZ/bert-base-uncased-qqp * Dataset: glue * Config: qqp * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @JeremiahZ for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: JeremiahZ/bert-base-uncased-qqp\n* Dataset: glue\n* Config: qqp\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @JeremiahZ for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: JeremiahZ/bert-base-uncased-qqp\n* Dataset: glue\n* Config: qqp\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @JeremiahZ for evaluating this model." ]
e16f043921522ca6271d5174bfdc22889c7b446e
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Natural Language Inference * Model: JeremiahZ/bert-base-uncased-mnli * Dataset: glue * Config: mnli_matched * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@JeremiahZ](https://huggingface.co/JeremiahZ) for evaluating this model.
autoevaluate/autoeval-eval-glue-mnli_matched-c9e0cb-1508854842
[ "autotrain", "evaluation", "region:us" ]
2022-09-19T13:17:41+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "natural_language_inference", "model": "JeremiahZ/bert-base-uncased-mnli", "metrics": [], "dataset_name": "glue", "dataset_config": "mnli_matched", "dataset_split": "validation", "col_mapping": {"text1": "premise", "text2": "hypothesis", "target": "label"}}}
2022-09-19T13:18:46+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Natural Language Inference * Model: JeremiahZ/bert-base-uncased-mnli * Dataset: glue * Config: mnli_matched * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @JeremiahZ for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: JeremiahZ/bert-base-uncased-mnli\n* Dataset: glue\n* Config: mnli_matched\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @JeremiahZ for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: JeremiahZ/bert-base-uncased-mnli\n* Dataset: glue\n* Config: mnli_matched\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @JeremiahZ for evaluating this model." ]
400174f5e633d5a97f599969362628c5b028794f
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Text Classification * Model: JeremiahZ/roberta-base-cola * Dataset: glue * Config: cola * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@JeremiahZ](https://huggingface.co/JeremiahZ) for evaluating this model.
autoevaluate/autoeval-eval-glue-cola-b911f0-1508954843
[ "autotrain", "evaluation", "region:us" ]
2022-09-19T13:48:55+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "multi_class_classification", "model": "JeremiahZ/roberta-base-cola", "metrics": ["matthews_correlation"], "dataset_name": "glue", "dataset_config": "cola", "dataset_split": "validation", "col_mapping": {"text": "sentence", "target": "label"}}}
2022-09-19T13:49:27+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Text Classification * Model: JeremiahZ/roberta-base-cola * Dataset: glue * Config: cola * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @JeremiahZ for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: JeremiahZ/roberta-base-cola\n* Dataset: glue\n* Config: cola\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @JeremiahZ for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: JeremiahZ/roberta-base-cola\n* Dataset: glue\n* Config: cola\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @JeremiahZ for evaluating this model." ]
9509b6529ed2a785841e86bf1637353291e8ddab
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Text Classification * Model: JeremiahZ/bert-base-uncased-cola * Dataset: glue * Config: cola * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@JeremiahZ](https://huggingface.co/JeremiahZ) for evaluating this model.
autoevaluate/autoeval-eval-glue-cola-b911f0-1508954844
[ "autotrain", "evaluation", "region:us" ]
2022-09-19T13:48:58+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "multi_class_classification", "model": "JeremiahZ/bert-base-uncased-cola", "metrics": ["matthews_correlation"], "dataset_name": "glue", "dataset_config": "cola", "dataset_split": "validation", "col_mapping": {"text": "sentence", "target": "label"}}}
2022-09-19T13:49:28+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Text Classification * Model: JeremiahZ/bert-base-uncased-cola * Dataset: glue * Config: cola * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @JeremiahZ for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: JeremiahZ/bert-base-uncased-cola\n* Dataset: glue\n* Config: cola\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @JeremiahZ for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: JeremiahZ/bert-base-uncased-cola\n* Dataset: glue\n* Config: cola\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @JeremiahZ for evaluating this model." ]
5b7b1e9a55331e18543b14c0ba25aaf38985337a
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Natural Language Inference * Model: JeremiahZ/roberta-base-mrpc * Dataset: glue * Config: mrpc * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@JeremiahZ](https://huggingface.co/JeremiahZ) for evaluating this model.
autoevaluate/autoeval-eval-glue-mrpc-9038ab-1509054845
[ "autotrain", "evaluation", "region:us" ]
2022-09-19T13:49:04+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "natural_language_inference", "model": "JeremiahZ/roberta-base-mrpc", "metrics": [], "dataset_name": "glue", "dataset_config": "mrpc", "dataset_split": "validation", "col_mapping": {"text1": "sentence1", "text2": "sentence2", "target": "label"}}}
2022-09-19T13:49:33+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Natural Language Inference * Model: JeremiahZ/roberta-base-mrpc * Dataset: glue * Config: mrpc * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @JeremiahZ for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: JeremiahZ/roberta-base-mrpc\n* Dataset: glue\n* Config: mrpc\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @JeremiahZ for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: JeremiahZ/roberta-base-mrpc\n* Dataset: glue\n* Config: mrpc\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @JeremiahZ for evaluating this model." ]
bf06c398b669a4cb58387c071e8e4bf84eefd64f
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Natural Language Inference * Model: JeremiahZ/bert-base-uncased-mrpc * Dataset: glue * Config: mrpc * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@JeremiahZ](https://huggingface.co/JeremiahZ) for evaluating this model.
autoevaluate/autoeval-eval-glue-mrpc-9038ab-1509054846
[ "autotrain", "evaluation", "region:us" ]
2022-09-19T13:49:09+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "natural_language_inference", "model": "JeremiahZ/bert-base-uncased-mrpc", "metrics": [], "dataset_name": "glue", "dataset_config": "mrpc", "dataset_split": "validation", "col_mapping": {"text1": "sentence1", "text2": "sentence2", "target": "label"}}}
2022-09-19T13:49:35+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Natural Language Inference * Model: JeremiahZ/bert-base-uncased-mrpc * Dataset: glue * Config: mrpc * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @JeremiahZ for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: JeremiahZ/bert-base-uncased-mrpc\n* Dataset: glue\n* Config: mrpc\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @JeremiahZ for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: JeremiahZ/bert-base-uncased-mrpc\n* Dataset: glue\n* Config: mrpc\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @JeremiahZ for evaluating this model." ]
e992f84dd6d471143439e0a111e3b9d73ebc5f3a
GAMa (Ground-video to Aerial-image Matching) dataset. Download at: https://www.crcv.ucf.edu/data1/GAMa/ # GAMa: Cross-view Video Geo-localization by [Shruti Vyas](https://scholar.google.com/citations?user=15YqUQUAAAAJ&hl=en); [Chen Chen](https://scholar.google.com/citations?user=TuEwcZ0AAAAJ&hl=en); [Mubarak Shah](https://scholar.google.com/citations?user=p8gsO3gAAAAJ&hl=en) Code at: https://github.com/svyas23/GAMa/blob/main/README.md
svyas23/GAMa
[ "license:other", "region:us" ]
2022-09-19T16:17:00+00:00
{"license": "other"}
2022-09-19T16:34:14+00:00
[]
[]
TAGS #license-other #region-us
GAMa (Ground-video to Aerial-image Matching) dataset Download at: URL # GAMa: Cross-view Video Geo-localization by Shruti Vyas; Chen Chen; Mubarak Shah code at: URL
[ "# GAMa: Cross-view Video Geo-localization \nby Shruti Vyas; Chen Chen; Mubarak Shah\n\ncode at: URL" ]
[ "TAGS\n#license-other #region-us \n", "# GAMa: Cross-view Video Geo-localization \nby Shruti Vyas; Chen Chen; Mubarak Shah\n\ncode at: URL" ]
7513b19b0b0283fcf2bf8e537f1fc6cba04250fe
# Dataset Card for G-KOMET ### Dataset Summary G-KOMET 1.0 is a corpus of metaphorical expressions in spoken Slovene, covering around 50,000 lexical units across 5695 sentences. The corpus contains samples from the Gos corpus of spoken Slovene and includes a balanced set of transcriptions of informative, educational, entertaining, private, and public discourse. It is also annotated with idioms and metonymies. Note that these are both annotated as metaphor types. This is different from the annotations in [KOMET](https://huggingface.co/datasets/cjvt/komet), where these are both considered a type of frame. We keep the data as untouched as possible and let the user decide how they want to handle this. ### Supported Tasks and Leaderboards Metaphor detection, metonymy detection, metaphor type classification, metaphor frame classification. ### Languages Slovenian. ## Dataset Structure ### Data Instances A sample instance from the dataset: ``` { 'document_name': 'G-Komet001.xml', 'idx': 3, 'idx_paragraph': 0, 'idx_sentence': 3, 'sentence_words': ['no', 'zdaj', 'samo', 'še', 'za', 'eno', 'orientacijo'], 'met_type': [ {'type': 'MRWi', 'word_indices': [6]} ], 'met_frame': [ {'type': 'spatial_orientation', 'word_indices': [6]} ] } ``` The sentence comes from the document `G-Komet001.xml`, is the 3rd sentence in the document and the 3rd sentence inside the 0th paragraph of the document. The word "orientacijo" is annotated as an indirect metaphor-related word (`MRWi`). It is also annotated with the frame "spatial_orientation". ### Data Fields - `document_name`: a string containing the name of the document in which the sentence appears; - `idx`: a uint32 containing the index of the sentence inside its document; - `idx_paragraph`: a uint32 containing the index of the paragraph in which the sentence appears; - `idx_sentence`: a uint32 containing the index of the sentence inside its paragraph; - `sentence_words`: words in the sentence; - `met_type`: metaphors in the sentence, marked by their type and word indices; - `met_frame`: metaphor frames in the sentence, marked by their type (frame name) and word indices. ## Dataset Creation The corpus contains samples from the GOS corpus of spoken Slovene and includes a balanced set of transcriptions of informative, educational, entertaining, private, and public discourse. It contains hand-annotated metaphor-related words, i.e. linguistic expressions that have the potential to be interpreted as metaphors; idioms, i.e. multi-word units in which at least one word has been used metaphorically; and metonymies, i.e. expressions used to refer to something else. For more information, please check out the paper (which is in Slovenian) or contact the dataset author. ## Additional Information ### Dataset Curators Špela Antloga. ### Licensing Information CC BY-NC-SA 4.0 ### Citation Information ``` @InProceedings{antloga2022gkomet, title = {Korpusni pristopi za identifikacijo metafore in metonimije: primer metonimije v korpusu gKOMET}, author={Antloga, \v{S}pela}, booktitle={Proceedings of the Conference on Language Technologies and Digital Humanities (Student papers)}, year={2022}, pages={271-277} } ``` ### Contributions Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset.
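A minimal usage sketch (the `cjvt/gkomet` Hub id matches this repository; the split name is an assumption, since the card does not document the released splits) showing how annotated spans can be reconstructed from the word indices:

```python
from datasets import load_dataset

# The split name "train" is an assumption; adjust to the released configuration.
ds = load_dataset("cjvt/gkomet", split="train")

ex = ds[0]
words = ex["sentence_words"]
# Each annotation stores its type and the indices of the words it covers.
for met in ex["met_type"]:
    span = " ".join(words[i] for i in met["word_indices"])
    print("metaphor type:", met["type"], "->", span)
for frame in ex["met_frame"]:
    span = " ".join(words[i] for i in frame["word_indices"])
    print("frame:", frame["type"], "->", span)
```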
cjvt/gkomet
[ "task_categories:token-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "language:sl", "license:cc-by-nc-sa-4.0", "metaphor-classification", "metonymy-classification", "metaphor-frame-classification", "multiword-expression-detection", "region:us" ]
2022-09-19T17:00:53+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["sl"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": [], "task_categories": ["token-classification"], "task_ids": [], "pretty_name": "G-KOMET", "tags": ["metaphor-classification", "metonymy-classification", "metaphor-frame-classification", "multiword-expression-detection"]}
2022-11-27T16:40:19+00:00
[]
[ "sl" ]
TAGS #task_categories-token-classification #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #language-Slovenian #license-cc-by-nc-sa-4.0 #metaphor-classification #metonymy-classification #metaphor-frame-classification #multiword-expression-detection #region-us
# Dataset Card for G-KOMET ### Dataset Summary G-KOMET 1.0 is a corpus of metaphorical expressions in spoken Slovene language, covering around 50,000 lexical units across 5695 sentences. The corpus contains samples from the Gos corpus of spoken Slovene and includes a balanced set of transcriptions of informative, educational, entertaining, private, and public discourse. It is also annotated with idioms and metonymies. Note that these are both annotated as metaphor types. This is different from the annotations in KOMET, where these are both considered a type of frame. We keep the data as untouched as possible and let the user decide how they want to handle this. ### Supported Tasks and Leaderboards Metaphor detection, metonymy detection, metaphor type classification, metaphor frame classification. ### Languages Slovenian. ## Dataset Structure ### Data Instances A sample instance from the dataset: The sentence comes from the document 'URL', is the 3rd sentence in the document and is the 3rd sentence inside the 0th paragraph in the document. The word "orientacijo" is annotated as an indirect metaphor-related word ('MRWi'). It is also annotated with the frame "spatial_orientation". ### Data Fields - 'document_name': a string containing the name of the document in which the sentence appears; - 'idx': a uint32 containing the index of the sentence inside its document; - 'idx_paragraph': a uint32 containing the index of the paragraph in which the sentence appears; - 'idx_sentence': a uint32 containing the index of the sentence inside its paragraph; containing the consecutive number of the paragraph inside the current news article; - 'sentence_words': words in the sentence; - 'met_type': metaphors in the sentence, marked by their type and word indices; - 'met_frame': metaphor frames in the sentence, marked by their type (frame name) and word indices. ## Dataset Creation The corpus contains samples from the GOS corpus of spoken Slovene and includes a balanced set of transcriptions of informative, educational, entertaining, private, and public discourse. It contains hand-annotated metaphor-related words, i.e. linguistic expressions that have the potential for people to interpret them as metaphors, idioms, i.e. multi-word units in which at least one word has been used metaphorically, and metonymies, expressions that we use to express something else. For more information, please check out the paper (which is in Slovenian language) or contact the dataset author. ## Additional Information ### Dataset Curators Špela Antloga. ### Licensing Information CC BY-NC-SA 4.0 ### Contributions Thanks to @matejklemen for adding this dataset.
[ "# Dataset Card for G-KOMET", "### Dataset Summary\n\nG-KOMET 1.0 is a corpus of metaphorical expressions in spoken Slovene language, covering around 50,000 lexical units across 5695 sentences. The corpus contains samples from the Gos corpus of spoken Slovene and includes a balanced set of transcriptions of informative, educational, entertaining, private, and public discourse.\n\nIt is also annotated with idioms and metonymies. Note that these are both annotated as metaphor types. This is different from the annotations in KOMET, where these are both considered a type of frame. We keep the data as untouched as possible and let the user decide how they want to handle this.", "### Supported Tasks and Leaderboards\n\nMetaphor detection, metonymy detection, metaphor type classification, metaphor frame classification.", "### Languages\n\nSlovenian.", "## Dataset Structure", "### Data Instances\n\nA sample instance from the dataset:\n\n\nThe sentence comes from the document 'URL', is the 3rd sentence in the document and is the 3rd sentence inside the 0th paragraph in the document.\nThe word \"orientacijo\" is annotated as an indirect metaphor-related word ('MRWi').\nIt is also annotated with the frame \"spatial_orientation\".", "### Data Fields\n\n- 'document_name': a string containing the name of the document in which the sentence appears; \n- 'idx': a uint32 containing the index of the sentence inside its document; \n- 'idx_paragraph': a uint32 containing the index of the paragraph in which the sentence appears;\n- 'idx_sentence': a uint32 containing the index of the sentence inside its paragraph;\ncontaining the consecutive number of the paragraph inside the current news article;\n- 'sentence_words': words in the sentence;\n- 'met_type': metaphors in the sentence, marked by their type and word indices;\n- 'met_frame': metaphor frames in the sentence, marked by their type (frame name) and word indices.", "## Dataset Creation\n\nThe corpus contains samples from the GOS corpus of spoken Slovene and includes a balanced set of transcriptions of informative, educational, entertaining, private, and public discourse. It contains hand-annotated metaphor-related words, i.e. linguistic expressions that have the potential for people to interpret them as metaphors, idioms, i.e. multi-word units in which at least one word has been used metaphorically, and metonymies, expressions that we use to express something else.\n\nFor more information, please check out the paper (which is in Slovenian language) or contact the dataset author.", "## Additional Information", "### Dataset Curators\n\nŠpela Antloga.", "### Licensing Information\n\nCC BY-NC-SA 4.0", "### Contributions\n\nThanks to @matejklemen for adding this dataset." ]
[ "TAGS\n#task_categories-token-classification #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #language-Slovenian #license-cc-by-nc-sa-4.0 #metaphor-classification #metonymy-classification #metaphor-frame-classification #multiword-expression-detection #region-us \n", "# Dataset Card for G-KOMET", "### Dataset Summary\n\nG-KOMET 1.0 is a corpus of metaphorical expressions in spoken Slovene language, covering around 50,000 lexical units across 5695 sentences. The corpus contains samples from the Gos corpus of spoken Slovene and includes a balanced set of transcriptions of informative, educational, entertaining, private, and public discourse.\n\nIt is also annotated with idioms and metonymies. Note that these are both annotated as metaphor types. This is different from the annotations in KOMET, where these are both considered a type of frame. We keep the data as untouched as possible and let the user decide how they want to handle this.", "### Supported Tasks and Leaderboards\n\nMetaphor detection, metonymy detection, metaphor type classification, metaphor frame classification.", "### Languages\n\nSlovenian.", "## Dataset Structure", "### Data Instances\n\nA sample instance from the dataset:\n\n\nThe sentence comes from the document 'URL', is the 3rd sentence in the document and is the 3rd sentence inside the 0th paragraph in the document.\nThe word \"orientacijo\" is annotated as an indirect metaphor-related word ('MRWi').\nIt is also annotated with the frame \"spatial_orientation\".", "### Data Fields\n\n- 'document_name': a string containing the name of the document in which the sentence appears; \n- 'idx': a uint32 containing the index of the sentence inside its document; \n- 'idx_paragraph': a uint32 containing the index of the paragraph in which the sentence appears;\n- 'idx_sentence': a uint32 containing the index of the sentence inside its paragraph;\ncontaining the consecutive number of the paragraph inside the current news article;\n- 'sentence_words': words in the sentence;\n- 'met_type': metaphors in the sentence, marked by their type and word indices;\n- 'met_frame': metaphor frames in the sentence, marked by their type (frame name) and word indices.", "## Dataset Creation\n\nThe corpus contains samples from the GOS corpus of spoken Slovene and includes a balanced set of transcriptions of informative, educational, entertaining, private, and public discourse. It contains hand-annotated metaphor-related words, i.e. linguistic expressions that have the potential for people to interpret them as metaphors, idioms, i.e. multi-word units in which at least one word has been used metaphorically, and metonymies, expressions that we use to express something else.\n\nFor more information, please check out the paper (which is in Slovenian language) or contact the dataset author.", "## Additional Information", "### Dataset Curators\n\nŠpela Antloga.", "### Licensing Information\n\nCC BY-NC-SA 4.0", "### Contributions\n\nThanks to @matejklemen for adding this dataset." ]
67f7da031721a14cc391c7fa7c8d96411282d8a3
**(Jan. 8 2024) Test set labels are released** # Dataset Card for SLUE ## Table of Contents - [Dataset Card for SLUE](#dataset-card-for-slue) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Automatic Speech Recognition (ASR)](#automatic-speech-recognition-asr) - [Named Entity Recognition (NER)](#named-entity-recognition-ner) - [Sentiment Analysis (SA)](#sentiment-analysis-sa) - [How-to-submit for your test set evaluation](#how-to-submit-for-your-test-set-evaluation) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [voxpopuli](#voxpopuli) - [voxceleb](#voxceleb) - [Data Fields](#data-fields) - [voxpopuli](#voxpopuli-1) - [voxceleb](#voxceleb-1) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Annotations](#annotations) - [Annotation process](#annotation-process) - [Who are the annotators?](#who-are-the-annotators) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [SLUE-VoxPopuli Dataset](#slue-voxpopuli-dataset) - [SLUE-VoxCeleb Dataset](#slue-voxceleb-dataset) - [Original License of OXFORD VGG VoxCeleb Dataset](#original-license-of-oxford-vgg-voxceleb-dataset) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://asappresearch.github.io/slue-toolkit](https://asappresearch.github.io/slue-toolkit) - **Repository:** [https://github.com/asappresearch/slue-toolkit/](https://github.com/asappresearch/slue-toolkit/) - **Paper:** [https://arxiv.org/pdf/2111.10367.pdf](https://arxiv.org/pdf/2111.10367.pdf) - **Leaderboard:** [https://asappresearch.github.io/slue-toolkit/leaderboard_v0.2.html](https://asappresearch.github.io/slue-toolkit/leaderboard_v0.2.html) - **Size of downloaded dataset files:** 1.95 GB - **Size of the generated dataset:** 9.59 MB - **Total amount of disk used:** 1.95 GB ### Dataset Summary We introduce the Spoken Language Understanding Evaluation (SLUE) benchmark. The goals of our work are to - Track research progress on multiple SLU tasks - Facilitate the development of pre-trained representations by providing fine-tuning and eval sets for a variety of SLU tasks - Foster the open exchange of research by focusing on freely available datasets that all academic and industrial groups can easily use. For this benchmark, we provide new annotation of publicly available, natural speech data for training and evaluation. We also provide a benchmark suite including code to download and pre-process the SLUE datasets, train the baseline models, and evaluate performance on SLUE tasks. Refer to [Toolkit](https://github.com/asappresearch/slue-toolkit) and [Paper](https://arxiv.org/pdf/2111.10367.pdf) for more details. 
### Supported Tasks and Leaderboards #### Automatic Speech Recognition (ASR) Although this is not a SLU task, ASR can help analyze the performance of downstream SLU tasks on the same domain. Additionally, pipeline approaches depend on ASR outputs, making ASR relevant to SLU. ASR is evaluated using word error rate (WER). #### Named Entity Recognition (NER) Named entity recognition involves detecting the named entities and their tags (types) in a given sentence. We evaluate performance using micro-averaged F1 and label-F1 scores. The F1 score evaluates an unordered list of named entity phrase and tag pairs predicted for each sentence. Only the tag predictions are considered for label-F1. #### Sentiment Analysis (SA) Sentiment analysis refers to classifying a given speech segment as having negative, neutral, or positive sentiment. We evaluate SA using macro-averaged (unweighted) recall and F1 scores. #### How-to-submit for your test set evaluation See here https://asappresearch.github.io/slue-toolkit/how-to-submit.html ### Languages The language data in SLUE is in English. ## Dataset Structure ### Data Instances #### voxpopuli - **Size of downloaded dataset files:** 398.45 MB - **Size of the generated dataset:** 5.81 MB - **Total amount of disk used:** 404.26 MB An example of 'train' looks as follows. ``` {'id': '20131007-0900-PLENARY-19-en_20131007-21:26:04_3', 'audio': {'path': '/Users/username/.cache/huggingface/datasets/downloads/extracted/e35757b0971ac7ff5e2fcdc301bba0364857044be55481656e2ade6f7e1fd372/slue-voxpopuli/fine-tune/20131007-0900-PLENARY-19-en_20131007-21:26:04_3.ogg', 'array': array([ 0.00132601, 0.00058881, -0.00052187, ..., 0.06857217, 0.07835515, 0.07845446], dtype=float32), 'sampling_rate': 16000}, 'speaker_id': 'None', 'normalized_text': 'two thousand and twelve for instance the new brussels i regulation provides for the right for employees to sue several employers together and the right for employees to have access to courts in europe even if the employer is domiciled outside europe. the commission will', 'raw_text': '2012. For instance, the new Brussels I Regulation provides for the right for employees to sue several employers together and the right for employees to have access to courts in Europe, even if the employer is domiciled outside Europe. The Commission will', 'raw_ner': {'type': ['LOC', 'LOC', 'LAW', 'DATE'], 'start': [227, 177, 28, 0], 'length': [6, 6, 21, 4]}, 'normalized_ner': {'type': ['LOC', 'LOC', 'LAW', 'DATE'], 'start': [243, 194, 45, 0], 'length': [6, 6, 21, 23]}, 'raw_combined_ner': {'type': ['PLACE', 'PLACE', 'LAW', 'WHEN'], 'start': [227, 177, 28, 0], 'length': [6, 6, 21, 4]}, 'normalized_combined_ner': {'type': ['PLACE', 'PLACE', 'LAW', 'WHEN'], 'start': [243, 194, 45, 0], 'length': [6, 6, 21, 23]}} ``` #### voxceleb - **Size of downloaded dataset files:** 1.55 GB - **Size of the generated dataset:** 3.78 MB - **Total amount of disk used:** 1.55 GB An example of 'train' looks as follows.
``` {'id': 'id10059_229vKIGbxrI_00004', 'audio': {'path': '/Users/felixwu/.cache/huggingface/datasets/downloads/extracted/400facb6d2f2496ebcd58a5ffe5fbf2798f363d1b719b888d28a29b872751626/slue-voxceleb/fine-tune_raw/id10059_229vKIGbxrI_00004.flac', 'array': array([-0.00442505, -0.00204468, 0.00628662, ..., 0.00158691, 0.00100708, 0.00033569], dtype=float32), 'sampling_rate': 16000}, 'speaker_id': 'id10059', 'normalized_text': 'of god what is a creator the almighty that uh', 'sentiment': 'Neutral', 'start_second': 0.45, 'end_second': 4.52} ``` ### Data Fields #### voxpopuli - `id`: a `string` id of an instance. - `audio`: audio feature of the raw audio. It is a dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`. - `speaker_id`: a `string` of the speaker id. - `raw_text`: a `string` feature that contains the raw transcription of the audio. - `normalized_text`: a `string` feature that contains the normalized transcription of the audio which is **used in the standard evaluation**. - `raw_ner`: the NER annotation of the `raw_text` using the same 18 NER classes as OntoNotes. - `normalized_ner`: the NER annotation of the `normalized_text` using the same 18 NER classes as OntoNotes. - `raw_combined_ner`: the NER annotation of the `raw_text` using our 7 NER classes (`WHEN`, `QUANT`, `PLACE`, `NORP`, `ORG`, `LAW`, `PERSON`). - `normalized_combined_ner`: the NER annotation of the `normalized_text` using our 7 NER classes (`WHEN`, `QUANT`, `PLACE`, `NORP`, `ORG`, `LAW`, `PERSON`) which is **used in the standard evaluation**. Each NER annotation is a dictionary containing three lists: `type`, `start`, and `length`. `type` is a list of the NER tag types. `start` is a list of the start character position of each named entity in the corresponding text. `length` is a list of the number of characters of each named entity. #### voxceleb - `id`: a `string` id of an instance. - `audio`: audio feature of the raw audio. Please use `start_second` and `end_second` to crop the transcribed segment. For example, `dataset[0]["audio"]["array"][int(dataset[0]["start_second"] * dataset[0]["audio"]["sample_rate"]):int(dataset[0]["end_second"] * dataset[0]["audio"]["sample_rate"])]`. It is a dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`. - `speaker_id`: a `string` of the speaker id. - `normalized_text`: a `string` feature that contains the transcription of the audio segment. - `sentiment`: a `string` feature which can be `Negative`, `Neutral`, or `Positive`. - `start_second`: a `float` feature that specifies the start second of the audio segment. 
### Data Fields

#### voxpopuli

- `id`: a `string` id of an instance.
- `audio`: audio feature of the raw audio. It is a dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- `speaker_id`: a `string` of the speaker id.
- `raw_text`: a `string` feature that contains the raw transcription of the audio.
- `normalized_text`: a `string` feature that contains the normalized transcription of the audio which is **used in the standard evaluation**.
- `raw_ner`: the NER annotation of the `raw_text` using the same 18 NER classes as OntoNotes.
- `normalized_ner`: the NER annotation of the `normalized_text` using the same 18 NER classes as OntoNotes.
- `raw_combined_ner`: the NER annotation of the `raw_text` using our 7 NER classes (`WHEN`, `QUANT`, `PLACE`, `NORP`, `ORG`, `LAW`, `PERSON`).
- `normalized_combined_ner`: the NER annotation of the `normalized_text` using our 7 NER classes (`WHEN`, `QUANT`, `PLACE`, `NORP`, `ORG`, `LAW`, `PERSON`) which is **used in the standard evaluation**.

Each NER annotation is a dictionary containing three lists: `type`, `start`, and `length`. `type` is a list of the NER tag types. `start` is a list of the start character position of each named entity in the corresponding text. `length` is a list of the number of characters of each named entity.

#### voxceleb

- `id`: a `string` id of an instance.
- `audio`: audio feature of the raw audio. Please use `start_second` and `end_second` to crop the transcribed segment. For example, `dataset[0]["audio"]["array"][int(dataset[0]["start_second"] * dataset[0]["audio"]["sampling_rate"]):int(dataset[0]["end_second"] * dataset[0]["audio"]["sampling_rate"])]`. It is a dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- `speaker_id`: a `string` of the speaker id.
- `normalized_text`: a `string` feature that contains the transcription of the audio segment.
- `sentiment`: a `string` feature which can be `Negative`, `Neutral`, or `Positive`.
- `start_second`: a `float` feature that specifies the start second of the audio segment.
- `end_second`: a `float` feature that specifies the end second of the audio segment.
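As a worked example of these two access patterns, the self-contained sketch below decodes a voxpopuli NER annotation back into entity strings and crops a voxceleb recording to its transcribed segment. It assumes `datasets` is installed; note that the decoded audio dictionary exposes the rate under the key `sampling_rate`, as in the example instances above.

```python
from datasets import load_dataset

# Decode a voxpopuli NER annotation (type/start/length lists) into entity strings
sample = load_dataset("asapp/slue", "voxpopuli", split="train")[0]
text = sample["normalized_text"]
ner = sample["normalized_combined_ner"]
for tag, start, length in zip(ner["type"], ner["start"], ner["length"]):
    print(f"{tag}: {text[start:start + length]!r}")

# Crop a voxceleb recording to the transcribed segment using start_second/end_second
seg = load_dataset("asapp/slue", "voxceleb", split="train")[0]
sr = seg["audio"]["sampling_rate"]
cropped = seg["audio"]["array"][int(seg["start_second"] * sr):int(seg["end_second"] * sr)]
print(cropped.shape)
```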
### Data Splits

|         |train|validation|test|
|---------|----:|---------:|---:|
|voxpopuli| 5000|      1753|1842|
|voxceleb | 5777|      1454|3553|

Here we use the standard split names in Huggingface's datasets, so the `train` and `validation` splits are the original `fine-tune` and `dev` splits of SLUE datasets, respectively.

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

#### SLUE-VoxPopuli Dataset

SLUE-VoxPopuli dataset contains a subset of VoxPopuli dataset and the copyright of this subset remains the same with the original license, CC0. See also European Parliament's legal notice (https://www.europarl.europa.eu/legal-notice/en/)

Additionally, we provide named entity annotation (normalized_ner and raw_ner column in .tsv files) and it is covered with the same license as CC0.

#### SLUE-VoxCeleb Dataset

SLUE-VoxCeleb Dataset contains a subset of OXFORD VoxCeleb dataset and the copyright of this subset remains the same Creative Commons Attribution 4.0 International license as below. Additionally, we provide transcription, sentiment annotation and timestamp (start, end) that follows the same license to OXFORD VoxCeleb dataset.

##### Original License of OXFORD VGG VoxCeleb Dataset

VoxCeleb1 contains over 100,000 utterances for 1,251 celebrities, extracted from videos uploaded to YouTube.
VoxCeleb2 contains over a million utterances for 6,112 celebrities, extracted from videos uploaded to YouTube.

The speakers span a wide range of different ethnicities, accents, professions and ages.

We provide Youtube URLs, associated face detections, and timestamps, as well as cropped audio segments and cropped face videos from the dataset. The copyright of both the original and cropped versions of the videos remains with the original owners.

The data is covered under a Creative Commons Attribution 4.0 International license (Please read the license terms here. https://creativecommons.org/licenses/by/4.0/).

Downloading this dataset implies agreement to follow the same conditions for any modification and/or re-distribution of the dataset in any form.

Additionally any entity using this dataset agrees to the following conditions:

THIS DATASET IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Please cite [1,2] below if you make use of the dataset.

[1] J. S. Chung, A. Nagrani, A. Zisserman
VoxCeleb2: Deep Speaker Recognition
INTERSPEECH, 2018.

[2] A. Nagrani, J. S. Chung, A. Zisserman
VoxCeleb: a large-scale speaker identification dataset
INTERSPEECH, 2017

### Citation Information

```
@inproceedings{shon2022slue,
  title={Slue: New benchmark tasks for spoken language understanding evaluation on natural speech},
  author={Shon, Suwon and Pasad, Ankita and Wu, Felix and Brusco, Pablo and Artzi, Yoav and Livescu, Karen and Han, Kyu J},
  booktitle={ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7927--7931},
  year={2022},
  organization={IEEE}
}
```

### Contributions

Thanks to [@fwu-asapp](https://github.com/fwu-asapp) for adding this dataset.
asapp/slue
[ "task_categories:automatic-speech-recognition", "task_categories:audio-classification", "task_categories:text-classification", "task_categories:token-classification", "task_ids:sentiment-analysis", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc0-1.0", "license:cc-by-4.0", "arxiv:2111.10367", "region:us" ]
2022-09-19T17:07:59+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": ["cc0-1.0", "cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["automatic-speech-recognition", "audio-classification", "text-classification", "token-classification"], "task_ids": ["sentiment-analysis", "named-entity-recognition"], "paperswithcode_id": "slue", "pretty_name": "SLUE (Spoken Language Understanding Evaluation benchmark)", "tags": [], "configs": [{"config_name": "voxceleb", "data_files": [{"split": "train", "path": "voxceleb/train-*"}, {"split": "validation", "path": "voxceleb/validation-*"}, {"split": "test", "path": "voxceleb/test-*"}]}, {"config_name": "voxpopuli", "data_files": [{"split": "train", "path": "voxpopuli/train-*"}, {"split": "validation", "path": "voxpopuli/validation-*"}, {"split": "test", "path": "voxpopuli/test-*"}]}], "dataset_info": [{"config_name": "voxceleb", "features": [{"name": "index", "dtype": "int32"}, {"name": "id", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "speaker_id", "dtype": "string"}, {"name": "normalized_text", "dtype": "string"}, {"name": "sentiment", "dtype": "string"}, {"name": "start_second", "dtype": "float64"}, {"name": "end_second", "dtype": "float64"}, {"name": "local_path", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 875444694.0, "num_examples": 5777}, {"name": "validation", "num_bytes": 213065127.0, "num_examples": 1454}, {"name": "test", "num_bytes": 545473843.0, "num_examples": 3553}], "download_size": 1563299519, "dataset_size": 1633983664.0}, {"config_name": "voxpopuli", "features": [{"name": "index", "dtype": "int32"}, {"name": "id", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "speaker_id", "dtype": "string"}, {"name": "normalized_text", "dtype": "string"}, {"name": "raw_text", "dtype": "string"}, {"name": "raw_ner", "sequence": [{"name": "type", "dtype": "string"}, {"name": "start", "dtype": "int32"}, {"name": "length", "dtype": "int32"}]}, {"name": "normalized_ner", "sequence": [{"name": "type", "dtype": "string"}, {"name": "start", "dtype": "int32"}, {"name": "length", "dtype": "int32"}]}, {"name": "raw_combined_ner", "sequence": [{"name": "type", "dtype": "string"}, {"name": "start", "dtype": "int32"}, {"name": "length", "dtype": "int32"}]}, {"name": "normalized_combined_ner", "sequence": [{"name": "type", "dtype": "string"}, {"name": "start", "dtype": "int32"}, {"name": "length", "dtype": "int32"}]}, {"name": "local_path", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 240725040.0, "num_examples": 5000}, {"name": "validation", "num_bytes": 83155577.099, "num_examples": 1753}, {"name": "test", "num_bytes": 83518039.328, "num_examples": 1842}], "download_size": 404062275, "dataset_size": 407398656.427}]}
2024-01-12T05:15:39+00:00
[ "2111.10367" ]
[ "en" ]
TAGS #task_categories-automatic-speech-recognition #task_categories-audio-classification #task_categories-text-classification #task_categories-token-classification #task_ids-sentiment-analysis #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc0-1.0 #license-cc-by-4.0 #arxiv-2111.10367 #region-us
(Jan. 8 2024) Test set labels are released Dataset Card for SLUE ===================== Table of Contents ----------------- * Dataset Card for SLUE + Table of Contents + Dataset Description - Dataset Summary - Supported Tasks and Leaderboards * Automatic Speech Recognition (ASR) * Named Entity Recognition (NER) * Sentiment Analysis (SA) * How-to-submit for your test set evaluation - Languages + Dataset Structure - Data Instances * voxpopuli * voxceleb - Data Fields * voxpopuli * voxceleb - Data Splits + Dataset Creation - Curation Rationale - Source Data * Initial Data Collection and Normalization * Who are the source language producers? - Annotations * Annotation process * Who are the annotators? - Personal and Sensitive Information + Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations + Additional Information - Dataset Curators - Licensing Information * SLUE-VoxPopuli Dataset * SLUE-VoxCeleb Dataset + Original License of OXFORD VGG VoxCeleb Dataset - Citation Information - Contributions Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: URL * Leaderboard: URL * Size of downloaded dataset files: 1.95 GB * Size of the generated dataset: 9.59 MB * Total amount of disk used: 1.95 GB ### Dataset Summary We introduce the Spoken Language Understanding Evaluation (SLUE) benchmark. The goals of our work are to * Track research progress on multiple SLU tasks * Facilitate the development of pre-trained representations by providing fine-tuning and eval sets for a variety of SLU tasks * Foster the open exchange of research by focusing on freely available datasets that all academic and industrial groups can easily use. For this benchmark, we provide new annotation of publicly available, natural speech data for training and evaluation. We also provide a benchmark suite including code to download and pre-process the SLUE datasets, train the baseline models, and evaluate performance on SLUE tasks. Refer to Toolkit and Paper for more details. ### Supported Tasks and Leaderboards #### Automatic Speech Recognition (ASR) Although this is not a SLU task, ASR can help analyze the performance of downstream SLU tasks on the same domain. Additionally, pipeline approaches depend on ASR outputs, making ASR relevant to SLU. ASR is evaluated using word error rate (WER). #### Named Entity Recognition (NER) Named entity recognition involves detecting the named entities and their tags (types) in a given sentence. We evaluate performance using micro-averaged F1 and label-F1 scores. The F1 score evaluates an unordered list of named entity phrase and tag pairs predicted for each sentence. Only the tag predictions are considered for label-F1. #### Sentiment Analysis (SA) Sentiment analysis refers to classifying a given speech segment as having negative, neutral, or positive sentiment. We evaluate SA using macro-averaged (unweighted) recall and F1 scores. #### How-to-submit for your test set evaluation See here URL ### Languages The language data in SLUE is in English. Dataset Structure ----------------- ### Data Instances #### voxpopuli * Size of downloaded dataset files: 398.45 MB * Size of the generated dataset: 5.81 MB * Total amount of disk used: 404.26 MB An example of 'train' looks as follows. #### voxceleb * Size of downloaded dataset files: 1.55 GB * Size of the generated dataset: 3.78 MB * Total amount of disk used: 1.55 GB An example of 'train' looks as follows. 
### Data Fields #### voxpopuli * 'id': a 'string' id of an instance. * 'audio': audio feature of the raw audio. It is a dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0]["audio"]' the audio file is automatically decoded and resampled to 'dataset.features["audio"].sampling\_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '"audio"' column, *i.e.* 'dataset[0]["audio"]' should always be preferred over 'dataset["audio"][0]'. * 'speaker\_id': a 'string' of the speaker id. * 'raw\_text': a 'string' feature that contains the raw transcription of the audio. * 'normalized\_text': a 'string' feature that contains the normalized transcription of the audio which is used in the standard evaluation. * 'raw\_ner': the NER annotation of the 'raw\_text' using the same 18 NER classes as OntoNotes. * 'normalized\_ner': the NER annotation of the 'normalized\_text' using the same 18 NER classes as OntoNotes. * 'raw\_combined\_ner': the NER annotation of the 'raw\_text' using our 7 NER classes ('WHEN', 'QUANT', 'PLACE', 'NORP', 'ORG', 'LAW', 'PERSON'). * 'normalized\_combined\_ner': the NER annotation of the 'normalized\_text' using our 7 NER classes ('WHEN', 'QUANT', 'PLACE', 'NORP', 'ORG', 'LAW', 'PERSON') which is used in the standard evaluation. Each NER annotation is a dictionary containing three lists: 'type', 'start', and 'length'. 'type' is a list of the NER tag types. 'start' is a list of the start character position of each named entity in the corresponding text. 'length' is a list of the number of characters of each named entity. #### voxceleb * 'id': a 'string' id of an instance. * 'audio': audio feature of the raw audio. Please use 'start\_second' and 'end\_second' to crop the transcribed segment. For example, 'dataset[0]["audio"]["array"][int(dataset[0]["start\_second"] \* dataset[0]["audio"]["sample\_rate"]):int(dataset[0]["end\_second"] \* dataset[0]["audio"]["sample\_rate"])]'. It is a dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0]["audio"]' the audio file is automatically decoded and resampled to 'dataset.features["audio"].sampling\_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '"audio"' column, *i.e.* 'dataset[0]["audio"]' should always be preferred over 'dataset["audio"][0]'. * 'speaker\_id': a 'string' of the speaker id. * 'normalized\_text': a 'string' feature that contains the transcription of the audio segment. * 'sentiment': a 'string' feature which can be 'Negative', 'Neutral', or 'Positive'. * 'start\_second': a 'float' feature that specifies the start second of the audio segment. * 'end\_second': a 'float' feature that specifies the end second of the audio segment. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? 
### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information #### SLUE-VoxPopuli Dataset SLUE-VoxPopuli dataset contains a subset of VoxPopuli dataset and the copyright of this subset remains the same with the original license, CC0. See also European Parliament's legal notice (URL Additionally, we provide named entity annotation (normalized\_ner and raw\_ner column in .tsv files) and it is covered with the same license as CC0. #### SLUE-VoxCeleb Dataset SLUE-VoxCeleb Dataset contains a subset of OXFORD VoxCeleb dataset and the copyright of this subset remains the same Creative Commons Attribution 4.0 International license as below. Additionally, we provide transcription, sentiment annotation and timestamp (start, end) that follows the same license to OXFORD VoxCeleb dataset. ##### Original License of OXFORD VGG VoxCeleb Dataset VoxCeleb1 contains over 100,000 utterances for 1,251 celebrities, extracted from videos uploaded to YouTube. VoxCeleb2 contains over a million utterances for 6,112 celebrities, extracted from videos uploaded to YouTube. The speakers span a wide range of different ethnicities, accents, professions and ages. We provide Youtube URLs, associated face detections, and timestamps, as well as cropped audio segments and cropped face videos from the dataset. The copyright of both the original and cropped versions of the videos remains with the original owners. The data is covered under a Creative Commons Attribution 4.0 International license (Please read the license terms here. URL Downloading this dataset implies agreement to follow the same conditions for any modification and/or re-distribution of the dataset in any form. Additionally any entity using this dataset agrees to the following conditions: THIS DATASET IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. Please cite [1,2] below if you make use of the dataset. [1] J. S. Chung, A. Nagrani, A. Zisserman VoxCeleb2: Deep Speaker Recognition INTERSPEECH, 2018. [2] A. Nagrani, J. S. Chung, A. Zisserman VoxCeleb: a large-scale speaker identification dataset INTERSPEECH, 2017 ### Contributions Thanks to @fwu-asapp for adding this dataset.
[ "### Dataset Summary\n\n\nWe introduce the Spoken Language Understanding Evaluation (SLUE) benchmark. The goals of our work are to\n\n\n* Track research progress on multiple SLU tasks\n* Facilitate the development of pre-trained representations by providing fine-tuning and eval sets for a variety of SLU tasks\n* Foster the open exchange of research by focusing on freely available datasets that all academic and industrial groups can easily use.\n\n\nFor this benchmark, we provide new annotation of publicly available, natural speech data for training and evaluation. We also provide a benchmark suite including code to download and pre-process the SLUE datasets, train the baseline models, and evaluate performance on SLUE tasks. Refer to Toolkit and Paper for more details.", "### Supported Tasks and Leaderboards", "#### Automatic Speech Recognition (ASR)\n\n\nAlthough this is not a SLU task, ASR can help analyze the performance of downstream SLU tasks on the same domain. Additionally, pipeline approaches depend on ASR outputs, making ASR relevant to SLU. ASR is evaluated using word error rate (WER).", "#### Named Entity Recognition (NER)\n\n\nNamed entity recognition involves detecting the named entities and their tags (types) in a given sentence. We evaluate performance using micro-averaged F1 and label-F1 scores. The F1 score evaluates an unordered list of named entity phrase and tag pairs predicted for each sentence. Only the tag predictions are considered for label-F1.", "#### Sentiment Analysis (SA)\n\n\nSentiment analysis refers to classifying a given speech segment as having negative, neutral, or positive sentiment. We evaluate SA using macro-averaged (unweighted) recall and F1 scores.", "#### How-to-submit for your test set evaluation\n\n\nSee here URL", "### Languages\n\n\nThe language data in SLUE is in English.\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### voxpopuli\n\n\n* Size of downloaded dataset files: 398.45 MB\n* Size of the generated dataset: 5.81 MB\n* Total amount of disk used: 404.26 MB\nAn example of 'train' looks as follows.", "#### voxceleb\n\n\n* Size of downloaded dataset files: 1.55 GB\n* Size of the generated dataset: 3.78 MB\n* Total amount of disk used: 1.55 GB\nAn example of 'train' looks as follows.", "### Data Fields", "#### voxpopuli\n\n\n* 'id': a 'string' id of an instance.\n* 'audio': audio feature of the raw audio. It is a dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling\\_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. 
Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.\n* 'speaker\\_id': a 'string' of the speaker id.\n* 'raw\\_text': a 'string' feature that contains the raw transcription of the audio.\n* 'normalized\\_text': a 'string' feature that contains the normalized transcription of the audio which is used in the standard evaluation.\n* 'raw\\_ner': the NER annotation of the 'raw\\_text' using the same 18 NER classes as OntoNotes.\n* 'normalized\\_ner': the NER annotation of the 'normalized\\_text' using the same 18 NER classes as OntoNotes.\n* 'raw\\_combined\\_ner': the NER annotation of the 'raw\\_text' using our 7 NER classes ('WHEN', 'QUANT', 'PLACE', 'NORP', 'ORG', 'LAW', 'PERSON').\n* 'normalized\\_combined\\_ner': the NER annotation of the 'normalized\\_text' using our 7 NER classes ('WHEN', 'QUANT', 'PLACE', 'NORP', 'ORG', 'LAW', 'PERSON') which is used in the standard evaluation.\nEach NER annotation is a dictionary containing three lists: 'type', 'start', and 'length'. 'type' is a list of the NER tag types. 'start' is a list of the start character position of each named entity in the corresponding text. 'length' is a list of the number of characters of each named entity.", "#### voxceleb\n\n\n* 'id': a 'string' id of an instance.\n* 'audio': audio feature of the raw audio. Please use 'start\\_second' and 'end\\_second' to crop the transcribed segment. For example, 'dataset[0][\"audio\"][\"array\"][int(dataset[0][\"start\\_second\"] \\* dataset[0][\"audio\"][\"sample\\_rate\"]):int(dataset[0][\"end\\_second\"] \\* dataset[0][\"audio\"][\"sample\\_rate\"])]'. It is a dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling\\_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.\n* 'speaker\\_id': a 'string' of the speaker id.\n* 'normalized\\_text': a 'string' feature that contains the transcription of the audio segment.\n* 'sentiment': a 'string' feature which can be 'Negative', 'Neutral', or 'Positive'.\n* 'start\\_second': a 'float' feature that specifies the start second of the audio segment.\n* 'end\\_second': a 'float' feature that specifies the end second of the audio segment.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "#### SLUE-VoxPopuli Dataset\n\n\nSLUE-VoxPopuli dataset contains a subset of VoxPopuli dataset and the copyright of this subset remains the same with the original license, CC0. 
See also European Parliament's legal notice (URL\n\n\nAdditionally, we provide named entity annotation (normalized\\_ner and raw\\_ner column in .tsv files) and it is covered with the same license as CC0.", "#### SLUE-VoxCeleb Dataset\n\n\nSLUE-VoxCeleb Dataset contains a subset of OXFORD VoxCeleb dataset and the copyright of this subset remains the same Creative Commons Attribution 4.0 International license as below. Additionally, we provide transcription, sentiment annotation and timestamp (start, end) that follows the same license to OXFORD VoxCeleb dataset.", "##### Original License of OXFORD VGG VoxCeleb Dataset\n\n\nVoxCeleb1 contains over 100,000 utterances for 1,251 celebrities, extracted from videos uploaded to YouTube. \n\nVoxCeleb2 contains over a million utterances for 6,112 celebrities, extracted from videos uploaded to YouTube.\n\n\nThe speakers span a wide range of different ethnicities, accents, professions and ages.\n\n\nWe provide Youtube URLs, associated face detections, and timestamps, as\nwell as cropped audio segments and cropped face videos from the\ndataset. The copyright of both the original and cropped versions\nof the videos remains with the original owners.\n\n\nThe data is covered under a Creative Commons\nAttribution 4.0 International license (Please read the\nlicense terms here. URL\n\n\nDownloading this dataset implies agreement to follow the same\nconditions for any modification and/or\nre-distribution of the dataset in any form.\n\n\nAdditionally any entity using this dataset agrees to the following conditions:\n\n\nTHIS DATASET IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS\nIS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED\nTO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A\nPARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT\nHOLDER BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,\nEXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,\nPROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR\nPROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF\nLIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING\nNEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\nSOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\n\nPlease cite [1,2] below if you make use of the dataset.\n\n\n[1] J. S. Chung, A. Nagrani, A. Zisserman \n\nVoxCeleb2: Deep Speaker Recognition \n\nINTERSPEECH, 2018.\n\n\n[2] A. Nagrani, J. S. Chung, A. Zisserman\nVoxCeleb: a large-scale speaker identification dataset \n\nINTERSPEECH, 2017", "### Contributions\n\n\nThanks to @fwu-asapp for adding this dataset." ]
[ "TAGS\n#task_categories-automatic-speech-recognition #task_categories-audio-classification #task_categories-text-classification #task_categories-token-classification #task_ids-sentiment-analysis #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc0-1.0 #license-cc-by-4.0 #arxiv-2111.10367 #region-us \n", "### Dataset Summary\n\n\nWe introduce the Spoken Language Understanding Evaluation (SLUE) benchmark. The goals of our work are to\n\n\n* Track research progress on multiple SLU tasks\n* Facilitate the development of pre-trained representations by providing fine-tuning and eval sets for a variety of SLU tasks\n* Foster the open exchange of research by focusing on freely available datasets that all academic and industrial groups can easily use.\n\n\nFor this benchmark, we provide new annotation of publicly available, natural speech data for training and evaluation. We also provide a benchmark suite including code to download and pre-process the SLUE datasets, train the baseline models, and evaluate performance on SLUE tasks. Refer to Toolkit and Paper for more details.", "### Supported Tasks and Leaderboards", "#### Automatic Speech Recognition (ASR)\n\n\nAlthough this is not a SLU task, ASR can help analyze the performance of downstream SLU tasks on the same domain. Additionally, pipeline approaches depend on ASR outputs, making ASR relevant to SLU. ASR is evaluated using word error rate (WER).", "#### Named Entity Recognition (NER)\n\n\nNamed entity recognition involves detecting the named entities and their tags (types) in a given sentence. We evaluate performance using micro-averaged F1 and label-F1 scores. The F1 score evaluates an unordered list of named entity phrase and tag pairs predicted for each sentence. Only the tag predictions are considered for label-F1.", "#### Sentiment Analysis (SA)\n\n\nSentiment analysis refers to classifying a given speech segment as having negative, neutral, or positive sentiment. We evaluate SA using macro-averaged (unweighted) recall and F1 scores.", "#### How-to-submit for your test set evaluation\n\n\nSee here URL", "### Languages\n\n\nThe language data in SLUE is in English.\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### voxpopuli\n\n\n* Size of downloaded dataset files: 398.45 MB\n* Size of the generated dataset: 5.81 MB\n* Total amount of disk used: 404.26 MB\nAn example of 'train' looks as follows.", "#### voxceleb\n\n\n* Size of downloaded dataset files: 1.55 GB\n* Size of the generated dataset: 3.78 MB\n* Total amount of disk used: 1.55 GB\nAn example of 'train' looks as follows.", "### Data Fields", "#### voxpopuli\n\n\n* 'id': a 'string' id of an instance.\n* 'audio': audio feature of the raw audio. It is a dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling\\_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. 
Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.\n* 'speaker\\_id': a 'string' of the speaker id.\n* 'raw\\_text': a 'string' feature that contains the raw transcription of the audio.\n* 'normalized\\_text': a 'string' feature that contains the normalized transcription of the audio which is used in the standard evaluation.\n* 'raw\\_ner': the NER annotation of the 'raw\\_text' using the same 18 NER classes as OntoNotes.\n* 'normalized\\_ner': the NER annotation of the 'normalized\\_text' using the same 18 NER classes as OntoNotes.\n* 'raw\\_combined\\_ner': the NER annotation of the 'raw\\_text' using our 7 NER classes ('WHEN', 'QUANT', 'PLACE', 'NORP', 'ORG', 'LAW', 'PERSON').\n* 'normalized\\_combined\\_ner': the NER annotation of the 'normalized\\_text' using our 7 NER classes ('WHEN', 'QUANT', 'PLACE', 'NORP', 'ORG', 'LAW', 'PERSON') which is used in the standard evaluation.\nEach NER annotation is a dictionary containing three lists: 'type', 'start', and 'length'. 'type' is a list of the NER tag types. 'start' is a list of the start character position of each named entity in the corresponding text. 'length' is a list of the number of characters of each named entity.", "#### voxceleb\n\n\n* 'id': a 'string' id of an instance.\n* 'audio': audio feature of the raw audio. Please use 'start\\_second' and 'end\\_second' to crop the transcribed segment. For example, 'dataset[0][\"audio\"][\"array\"][int(dataset[0][\"start\\_second\"] \\* dataset[0][\"audio\"][\"sample\\_rate\"]):int(dataset[0][\"end\\_second\"] \\* dataset[0][\"audio\"][\"sample\\_rate\"])]'. It is a dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling\\_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.\n* 'speaker\\_id': a 'string' of the speaker id.\n* 'normalized\\_text': a 'string' feature that contains the transcription of the audio segment.\n* 'sentiment': a 'string' feature which can be 'Negative', 'Neutral', or 'Positive'.\n* 'start\\_second': a 'float' feature that specifies the start second of the audio segment.\n* 'end\\_second': a 'float' feature that specifies the end second of the audio segment.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "#### SLUE-VoxPopuli Dataset\n\n\nSLUE-VoxPopuli dataset contains a subset of VoxPopuli dataset and the copyright of this subset remains the same with the original license, CC0. 
See also European Parliament's legal notice (URL\n\n\nAdditionally, we provide named entity annotation (normalized\\_ner and raw\\_ner column in .tsv files) and it is covered with the same license as CC0.", "#### SLUE-VoxCeleb Dataset\n\n\nSLUE-VoxCeleb Dataset contains a subset of OXFORD VoxCeleb dataset and the copyright of this subset remains the same Creative Commons Attribution 4.0 International license as below. Additionally, we provide transcription, sentiment annotation and timestamp (start, end) that follows the same license to OXFORD VoxCeleb dataset.", "##### Original License of OXFORD VGG VoxCeleb Dataset\n\n\nVoxCeleb1 contains over 100,000 utterances for 1,251 celebrities, extracted from videos uploaded to YouTube. \n\nVoxCeleb2 contains over a million utterances for 6,112 celebrities, extracted from videos uploaded to YouTube.\n\n\nThe speakers span a wide range of different ethnicities, accents, professions and ages.\n\n\nWe provide Youtube URLs, associated face detections, and timestamps, as\nwell as cropped audio segments and cropped face videos from the\ndataset. The copyright of both the original and cropped versions\nof the videos remains with the original owners.\n\n\nThe data is covered under a Creative Commons\nAttribution 4.0 International license (Please read the\nlicense terms here. URL\n\n\nDownloading this dataset implies agreement to follow the same\nconditions for any modification and/or\nre-distribution of the dataset in any form.\n\n\nAdditionally any entity using this dataset agrees to the following conditions:\n\n\nTHIS DATASET IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS\nIS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED\nTO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A\nPARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT\nHOLDER BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,\nEXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,\nPROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR\nPROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF\nLIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING\nNEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\nSOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\n\nPlease cite [1,2] below if you make use of the dataset.\n\n\n[1] J. S. Chung, A. Nagrani, A. Zisserman \n\nVoxCeleb2: Deep Speaker Recognition \n\nINTERSPEECH, 2018.\n\n\n[2] A. Nagrani, J. S. Chung, A. Zisserman\nVoxCeleb: a large-scale speaker identification dataset \n\nINTERSPEECH, 2017", "### Contributions\n\n\nThanks to @fwu-asapp for adding this dataset." ]
ff88393aa85808a6172b21e19e27a40ab882a734
Initial annotated dataset derived from `ImageIN/IA_unlabelled`
ImageIN/ImageIn_annotations
[ "task_categories:image-classification", "region:us" ]
2022-09-19T17:16:25+00:00
{"annotations_creators": [], "language_creators": [], "language": [], "license": [], "multilinguality": [], "size_categories": [], "source_datasets": [], "task_categories": ["image-classification"], "task_ids": [], "pretty_name": "ImageIn hand labelled", "tags": []}
2022-09-26T11:20:03+00:00
[]
[]
TAGS #task_categories-image-classification #region-us
Initial annotated dataset derived from 'ImageIN/IA_unlabelled'
[]
[ "TAGS\n#task_categories-image-classification #region-us \n" ]
c76f26430961c9cb3dd896809d3b303225bd6003
A piece of Federico García Lorca's body of work.
smkerr/lorca
[ "region:us" ]
2022-09-19T19:00:37+00:00
{}
2022-09-19T19:02:06+00:00
[]
[]
TAGS #region-us
A piece of Federico García Lorca's body of work.
[]
[ "TAGS\n#region-us \n" ]
9fbd8304e81d1eadc8eda9738dec458621f25f79
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Zero-Shot Text Classification
* Model: Tristan/opt-30b-copy
* Dataset: autoevaluate/zero-shot-classification-sample
* Config: autoevaluate--zero-shot-classification-sample
* Split: test

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model.
autoevaluate/autoeval-eval-autoevaluate__zero-shot-classification-sample-autoevalu-1f3143-1511754885
[ "autotrain", "evaluation", "region:us" ]
2022-09-19T19:30:18+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["autoevaluate/zero-shot-classification-sample"], "eval_info": {"task": "text_zero_shot_classification", "model": "Tristan/opt-30b-copy", "metrics": [], "dataset_name": "autoevaluate/zero-shot-classification-sample", "dataset_config": "autoevaluate--zero-shot-classification-sample", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}}
2022-09-19T20:08:28+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Zero-Shot Text Classification * Model: Tristan/opt-30b-copy * Dataset: autoevaluate/zero-shot-classification-sample * Config: autoevaluate--zero-shot-classification-sample * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @mathemakitten for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: Tristan/opt-30b-copy\n* Dataset: autoevaluate/zero-shot-classification-sample\n* Config: autoevaluate--zero-shot-classification-sample\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @mathemakitten for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: Tristan/opt-30b-copy\n* Dataset: autoevaluate/zero-shot-classification-sample\n* Config: autoevaluate--zero-shot-classification-sample\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @mathemakitten for evaluating this model." ]
084060f16b46f3165318f760b2339208b19a0bde
# Dataset Card for ASQA

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Contributions](#contributions)

## Dataset Description

- **Repository:** https://github.com/google-research/language/tree/master/language/asqa
- **Paper:** https://arxiv.org/abs/2204.06092
- **Leaderboard:** https://ambigqa.github.io/asqa_leaderboard.html

### Dataset Summary

ASQA is the first long-form question answering dataset that focuses on ambiguous factoid questions. Different from previous long-form answers datasets, each question is annotated with both long-form answers and extractive question-answer pairs, which should be answerable by the generated passage. A generated long-form answer will be evaluated using both ROUGE and QA accuracy. In the paper, we show that these evaluation metrics are well-correlated with human judgments.

### Supported Tasks and Leaderboards

Long-form Question Answering. [Leaderboard](https://ambigqa.github.io/asqa_leaderboard.html)

### Languages

- English

## Dataset Structure

### Data Instances

```py
{
    "ambiguous_question": "Where does the civil liberties act place the blame for the internment of u.s. citizens?",
    "qa_pairs": [
        {
            "context": "No context provided",
            "question": "Where does the civil liberties act place the blame for the internment of u.s. citizens by apologizing on behalf of them?",
            "short_answers": [
                "the people of the United States"
            ],
            "wikipage": None
        },
        {
            "context": "No context provided",
            "question": "Where does the civil liberties act place the blame for the internment of u.s. citizens by making them pay reparations?",
            "short_answers": [
                "United States government"
            ],
            "wikipage": None
        }
    ],
    "wikipages": [
        {
            "title": "Civil Liberties Act of 1988",
            "url": "https://en.wikipedia.org/wiki/Civil%20Liberties%20Act%20of%201988"
        }
    ],
    "annotations": [
        {
            "knowledge": [
                {
                    "content": "The Civil Liberties Act of 1988 (Pub.L. 100–383, title I, August 10, 1988, 102 Stat. 904, 50a U.S.C. § 1989b et seq.) is a United States federal law that granted reparations to Japanese Americans who had been interned by the United States government during World War II.",
                    "wikipage": "Civil Liberties Act of 1988"
                }
            ],
            "long_answer": "The Civil Liberties Act of 1988 is a United States federal law that granted reparations to Japanese Americans who had been interned by the United States government during World War II. In the act, the blame for the internment of U.S. citizens was placed on the people of the United States, by apologizing on behalf of them. Furthermore, the blame for the internment was placed on the United States government, by making them pay reparations."
        }
    ],
    "sample_id": -4557617869928758000
}
```

### Data Fields

- `ambiguous_question`: ambiguous question from AmbigQA.
- `annotations`: long-form answers to the ambiguous question constructed by ASQA annotators.
- `annotations/knowledge`: list of additional knowledge pieces.
- `annotations/knowledge/content`: a passage from Wikipedia.
- `annotations/knowledge/wikipage`: title of the Wikipedia page the passage was taken from.
- `annotations/long_answer`: annotation.
- `qa_pairs`: Q&A pairs from AmbigQA which are used for disambiguation.
- `qa_pairs/context`: additional context provided.
- `qa_pairs/question`: disambiguated question from AmbigQA.
- `qa_pairs/short_answers`: list of short answers from AmbigQA.
- `qa_pairs/wikipage`: title of the Wikipedia page the additional context was taken from.
- `sample_id`: the unique id of the sample.
- `wikipages`: list of Wikipedia pages visited by AmbigQA annotators.
- `wikipages/title`: title of the Wikipedia page.
- `wikipages/url`: link to the Wikipedia page.
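To make the nesting above concrete, here is a minimal loading sketch. It assumes the `datasets` package is installed and that `qa_pairs` and `annotations` decode to lists of dicts, matching the instance shown; depending on how the features are declared they could instead decode to dicts of lists.

```python
from datasets import load_dataset

asqa = load_dataset("din0s/asqa", split="train")

sample = asqa[0]
print(sample["ambiguous_question"])
for pair in sample["qa_pairs"]:
    # Each disambiguated sub-question comes with its own short answers
    print("  Q:", pair["question"])
    print("  A:", pair["short_answers"])
print("Long answer:", sample["annotations"][0]["long_answer"])
```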
### Data Splits

| **Split** | **Instances** |
|-----------|---------------|
| Train     | 4353          |
| Dev       | 948           |

## Additional Information

### Contributions

Thanks to [@din0s](https://github.com/din0s) for adding this dataset.
din0s/asqa
[ "task_categories:question-answering", "task_ids:open-domain-qa", "annotations_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:extended|ambig_qa", "language:en", "license:apache-2.0", "factoid questions", "long-form answers", "arxiv:2204.06092", "region:us" ]
2022-09-19T21:25:51+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["extended|ambig_qa"], "task_categories": ["question-answering"], "task_ids": ["open-domain-qa"], "pretty_name": "ASQA", "tags": ["factoid questions", "long-form answers"]}
2022-09-20T15:14:54+00:00
[ "2204.06092" ]
[ "en" ]
TAGS #task_categories-question-answering #task_ids-open-domain-qa #annotations_creators-crowdsourced #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-extended|ambig_qa #language-English #license-apache-2.0 #factoid questions #long-form answers #arxiv-2204.06092 #region-us
Dataset Card for ASQA ===================== Table of Contents ----------------- * Table of Contents * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Additional Information + Contributions Dataset Description ------------------- * Repository: URL * Paper: URL * Leaderboard: URL ### Dataset Summary ASQA is the first long-form question answering dataset that focuses on ambiguous factoid questions. Different from previous long-form answers datasets, each question is annotated with both long-form answers and extractive question-answer pairs, which should be answerable by the generated passage. A generated long-form answer will be evaluated using both ROUGE and QA accuracy. In the paper, we show that these evaluation metrics are well-correlated with human judgments. ### Supported Tasks and Leaderboards Long-form Question Answering. Leaderboard ### Languages * English Dataset Structure ----------------- ### Data Instances ### Data Fields * 'ambiguous\_question': ambiguous question from AmbigQA. * 'annotations': long-form answers to the ambiguous question constructed by ASQA annotators. * 'annotations/knowledge': list of additional knowledge pieces. * 'annotations/knowledge/content': a passage from Wikipedia. * 'annotations/knowledge/wikipage': title of the Wikipedia page the passage was taken from. * 'annotations/long\_answer': annotation. * 'qa\_pairs': Q&A pairs from AmbigQA which are used for disambiguation. * 'qa\_pairs/context': additional context provided. * 'qa\_pairs/question': disambiguated question from AmbigQA. * 'qa\_pairs/short\_answers': list of short answers from AmbigQA. * 'qa\_pairs/wikipage': title of the Wikipedia page the additional context was taken from. * 'sample\_id': the unique id of the sample * 'wikipages': list of Wikipedia pages visited by AmbigQA annotators. * 'wikipages/title': title of the Wikipedia page. * 'wikipages/url': link to the Wikipedia page. ### Data Splits Additional Information ---------------------- ### Contributions Thanks to @din0s for adding this dataset.
[ "### Dataset Summary\n\n\nASQA is the first long-form question answering dataset that focuses on ambiguous factoid questions. Different from previous long-form answers datasets, each question is annotated with both long-form answers and extractive question-answer pairs, which should be answerable by the generated passage. A generated long-form answer will be evaluated using both ROUGE and QA accuracy. In the paper, we show that these evaluation metrics are well-correlated with human judgments.", "### Supported Tasks and Leaderboards\n\n\nLong-form Question Answering. Leaderboard", "### Languages\n\n\n* English\n\n\nDataset Structure\n-----------------", "### Data Instances", "### Data Fields\n\n\n* 'ambiguous\\_question': ambiguous question from AmbigQA.\n* 'annotations': long-form answers to the ambiguous question constructed by ASQA annotators.\n* 'annotations/knowledge': list of additional knowledge pieces.\n* 'annotations/knowledge/content': a passage from Wikipedia.\n* 'annotations/knowledge/wikipage': title of the Wikipedia page the passage was taken from.\n* 'annotations/long\\_answer': annotation.\n* 'qa\\_pairs': Q&A pairs from AmbigQA which are used for disambiguation.\n* 'qa\\_pairs/context': additional context provided.\n* 'qa\\_pairs/question': disambiguated question from AmbigQA.\n* 'qa\\_pairs/short\\_answers': list of short answers from AmbigQA.\n* 'qa\\_pairs/wikipage': title of the Wikipedia page the additional context was taken from.\n* 'sample\\_id': the unique id of the sample\n* 'wikipages': list of Wikipedia pages visited by AmbigQA annotators.\n* 'wikipages/title': title of the Wikipedia page.\n* 'wikipages/url': link to the Wikipedia page.", "### Data Splits\n\n\n\nAdditional Information\n----------------------", "### Contributions\n\n\nThanks to @din0s for adding this dataset." ]
[ "TAGS\n#task_categories-question-answering #task_ids-open-domain-qa #annotations_creators-crowdsourced #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-extended|ambig_qa #language-English #license-apache-2.0 #factoid questions #long-form answers #arxiv-2204.06092 #region-us \n", "### Dataset Summary\n\n\nASQA is the first long-form question answering dataset that focuses on ambiguous factoid questions. Different from previous long-form answers datasets, each question is annotated with both long-form answers and extractive question-answer pairs, which should be answerable by the generated passage. A generated long-form answer will be evaluated using both ROUGE and QA accuracy. In the paper, we show that these evaluation metrics are well-correlated with human judgments.", "### Supported Tasks and Leaderboards\n\n\nLong-form Question Answering. Leaderboard", "### Languages\n\n\n* English\n\n\nDataset Structure\n-----------------", "### Data Instances", "### Data Fields\n\n\n* 'ambiguous\\_question': ambiguous question from AmbigQA.\n* 'annotations': long-form answers to the ambiguous question constructed by ASQA annotators.\n* 'annotations/knowledge': list of additional knowledge pieces.\n* 'annotations/knowledge/content': a passage from Wikipedia.\n* 'annotations/knowledge/wikipage': title of the Wikipedia page the passage was taken from.\n* 'annotations/long\\_answer': annotation.\n* 'qa\\_pairs': Q&A pairs from AmbigQA which are used for disambiguation.\n* 'qa\\_pairs/context': additional context provided.\n* 'qa\\_pairs/question': disambiguated question from AmbigQA.\n* 'qa\\_pairs/short\\_answers': list of short answers from AmbigQA.\n* 'qa\\_pairs/wikipage': title of the Wikipedia page the additional context was taken from.\n* 'sample\\_id': the unique id of the sample\n* 'wikipages': list of Wikipedia pages visited by AmbigQA annotators.\n* 'wikipages/title': title of the Wikipedia page.\n* 'wikipages/url': link to the Wikipedia page.", "### Data Splits\n\n\n\nAdditional Information\n----------------------", "### Contributions\n\n\nThanks to @din0s for adding this dataset." ]
c5a4721b5d4ff814a1af2020df60566a313ea67b
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Zero-Shot Text Classification
* Model: Tristan/opt-30b-copy
* Dataset: Tristan/zero-shot-classification-large-test
* Config: Tristan--zero-shot-classification-large-test
* Split: test

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@Tristan](https://huggingface.co/Tristan) for evaluating this model.
autoevaluate/autoeval-eval-Tristan__zero-shot-classification-large-test-Tristan__z-8b146c-1511954902
[ "autotrain", "evaluation", "region:us" ]
2022-09-19T21:26:19+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["Tristan/zero-shot-classification-large-test"], "eval_info": {"task": "text_zero_shot_classification", "model": "Tristan/opt-30b-copy", "metrics": [], "dataset_name": "Tristan/zero-shot-classification-large-test", "dataset_config": "Tristan--zero-shot-classification-large-test", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}}
2022-09-21T04:08:06+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Zero-Shot Text Classification * Model: Tristan/opt-30b-copy * Dataset: Tristan/zero-shot-classification-large-test * Config: Tristan--zero-shot-classification-large-test * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @Tristan for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: Tristan/opt-30b-copy\n* Dataset: Tristan/zero-shot-classification-large-test\n* Config: Tristan--zero-shot-classification-large-test\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @Tristan for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: Tristan/opt-30b-copy\n* Dataset: Tristan/zero-shot-classification-large-test\n* Config: Tristan--zero-shot-classification-large-test\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @Tristan for evaluating this model." ]
baa096440c81620325d5c6f774eacb668dbd1db8
- Social science en-ko translation corpus
bongsoo/social_science_en_ko
[ "language:ko", "license:apache-2.0", "region:us" ]
2022-09-20T03:45:54+00:00
{"language": ["ko"], "license": "apache-2.0"}
2022-10-04T23:09:30+00:00
[]
[ "ko" ]
TAGS #language-Korean #license-apache-2.0 #region-us
- Social science en-ko translation corpus
[]
[ "TAGS\n#language-Korean #license-apache-2.0 #region-us \n" ]
8ffecf6e6c61389f9c02f13f3875d810ff506fa3
- News & everyday conversation en-ko translation corpus
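The card itself gives no loading instructions, so the following is a minimal sketch in the spirit of the other cards in this collection. The `train` split name is an assumption, as the card does not document the splits or column layout; inspect the repository files if they differ.

```python
from datasets import load_dataset

# Stream the corpus rather than downloading it all at once; the available
# splits and column names are not documented on the card, so inspect the
# first row before relying on any particular schema.
ds = load_dataset("bongsoo/news_talk_en_ko", streaming=True, split="train")
row = next(iter(ds))
print(row)
```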
bongsoo/news_talk_en_ko
[ "language:ko", "license:apache-2.0", "region:us" ]
2022-09-20T04:10:56+00:00
{"language": ["ko"], "license": "apache-2.0"}
2022-10-04T23:09:50+00:00
[]
[ "ko" ]
TAGS #language-Korean #license-apache-2.0 #region-us
- News & everyday conversation en-ko translation corpus
[]
[ "TAGS\n#language-Korean #license-apache-2.0 #region-us \n" ]
d356ef19a4eb287e88a51d07a56b73ba88c7f188
# Dataset Card for [Dataset Name]

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)

## Dataset Description

- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

[More Information Needed]

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

[More Information Needed]

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

[More Information Needed]

### Data Splits

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

[More Information Needed]

### Contributions

Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
ai4bharat/IndicCOPA
[ "task_categories:multiple-choice", "task_ids:multiple-choice-qa", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:multilingual", "size_categories:1K<n<10K", "source_datasets:extended|xcopa", "language:as", "language:bn", "language:en", "language:gom", "language:gu", "language:hi", "language:kn", "language:mai", "language:ml", "language:mr", "language:ne", "language:or", "language:pa", "language:sa", "language:sat", "language:sd", "language:ta", "language:te", "language:ur", "license:cc-by-4.0", "region:us" ]
2022-09-20T07:18:35+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["as", "bn", "en", "gom", "gu", "hi", "kn", "mai", "ml", "mr", "ne", "or", "pa", "sa", "sat", "sd", "ta", "te", "ur"], "license": ["cc-by-4.0"], "multilinguality": ["multilingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["extended|xcopa"], "task_categories": ["multiple-choice"], "task_ids": ["multiple-choice-qa"], "pretty_name": "IndicXCOPA", "tags": []}
2022-12-15T11:34:32+00:00
[]
[ "as", "bn", "en", "gom", "gu", "hi", "kn", "mai", "ml", "mr", "ne", "or", "pa", "sa", "sat", "sd", "ta", "te", "ur" ]
TAGS #task_categories-multiple-choice #task_ids-multiple-choice-qa #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-multilingual #size_categories-1K<n<10K #source_datasets-extended|xcopa #language-Assamese #language-Bengali #language-English #language-Goan Konkani #language-Gujarati #language-Hindi #language-Kannada #language-Maithili #language-Malayalam #language-Marathi #language-Nepali (macrolanguage) #language-Oriya (macrolanguage) #language-Panjabi #language-Sanskrit #language-Santali #language-Sindhi #language-Tamil #language-Telugu #language-Urdu #license-cc-by-4.0 #region-us
# Dataset Card for [Dataset Name] ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: - Repository: - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions Thanks to @github-username for adding this dataset.
[ "# Dataset Card for [Dataset Name]", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @github-username for adding this dataset." ]
[ "TAGS\n#task_categories-multiple-choice #task_ids-multiple-choice-qa #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-multilingual #size_categories-1K<n<10K #source_datasets-extended|xcopa #language-Assamese #language-Bengali #language-English #language-Goan Konkani #language-Gujarati #language-Hindi #language-Kannada #language-Maithili #language-Malayalam #language-Marathi #language-Nepali (macrolanguage) #language-Oriya (macrolanguage) #language-Panjabi #language-Sanskrit #language-Santali #language-Sindhi #language-Tamil #language-Telugu #language-Urdu #license-cc-by-4.0 #region-us \n", "# Dataset Card for [Dataset Name]", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @github-username for adding this dataset." ]
fcbf84785bd5d498892cf01a322a92bb1a17f9bb
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP14 * Dataset: kmfoda/booksum * Config: kmfoda--booksum * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model.
autoevaluate/autoeval-eval-kmfoda__booksum-kmfoda__booksum-373400-1514054915
[ "autotrain", "evaluation", "region:us" ]
2022-09-20T08:57:23+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["kmfoda/booksum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP14", "metrics": [], "dataset_name": "kmfoda/booksum", "dataset_config": "kmfoda--booksum", "dataset_split": "test", "col_mapping": {"text": "chapter", "target": "summary_text"}}}
2022-09-21T14:33:56+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP14 * Dataset: kmfoda/booksum * Config: kmfoda--booksum * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @pszemraj for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP14\n* Dataset: kmfoda/booksum\n* Config: kmfoda--booksum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @pszemraj for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP14\n* Dataset: kmfoda/booksum\n* Config: kmfoda--booksum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @pszemraj for evaluating this model." ]
97139a9fbab6912b3fd89604427d4304d20847e6
# Dataset Card for RSDO4 en-sl parallel corpus ### Dataset Summary The RSDO4 parallel corpus of English-Slovene and Slovene-English translation pairs was collected as part of work package 4 of the Slovene in the Digital Environment project. It contains texts collected from public institutions and texts submitted by individual donors through the text collection portal created within the project. The corpus consists of 964433 translation pairs (extracted from standard translation formats (TMX, XLIFF) or manually aligned) in randomized order which can be used for machine translation training. ### Supported Tasks and Leaderboards Machine translation. ### Languages English, Slovenian. ## Dataset Structure ### Data Instances A sample instance from the dataset: ``` { 'en_seq': 'the total value of its assets exceeds EUR 30000000000;', 'sl_seq': 'skupna vrednost njenih sredstev presega 30000000000 EUR' } ``` ### Data Fields - `en_seq`: a string containing the English sequence; - `sl_seq`: a string containing the Slovene sequence. ## Additional Information ### Dataset Curators Andraž Repar and Iztok Lebar Bajec. ### Licensing Information CC BY-SA 4.0. ### Citation Information ``` @misc{rsdo4_en_sl, title = {Parallel corpus {EN}-{SL} {RSDO4} 1.0}, author = {Repar, Andra{\v z} and Lebar Bajec, Iztok}, url = {http://hdl.handle.net/11356/1457}, year = {2021} } ``` ### Contributions Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset.
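For illustration, here is a minimal sketch of loading the corpus with the `datasets` library. The Hub identifier `cjvt/rsdo4_en_sl` and the single `train` split are assumptions rather than facts stated in this card; adjust them to wherever the corpus is actually hosted.

```python
from datasets import load_dataset

# Assumed Hub identifier and split name (not stated in the card above).
ds = load_dataset("cjvt/rsdo4_en_sl", split="train")

# Each row is one translation pair with the fields documented above.
pair = ds[0]
print(pair["en_seq"])  # English side
print(pair["sl_seq"])  # Slovene side
```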
cjvt/rsdo4_en_sl
[ "task_categories:translation", "task_categories:text2text-generation", "task_categories:text-generation", "annotations_creators:expert-generated", "annotations_creators:found", "language_creators:crowdsourced", "multilinguality:translation", "size_categories:100K<n<1M", "language:en", "language:sl", "license:cc-by-sa-4.0", "parallel data", "rsdo", "region:us" ]
2022-09-20T14:23:40+00:00
{"annotations_creators": ["expert-generated", "found"], "language_creators": ["crowdsourced"], "language": ["en", "sl"], "license": ["cc-by-sa-4.0"], "multilinguality": ["translation"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["translation", "text2text-generation", "text-generation"], "task_ids": [], "pretty_name": "RSDO4 en-sl parallel corpus", "tags": ["parallel data", "rsdo"]}
2022-09-20T16:38:33+00:00
[]
[ "en", "sl" ]
TAGS #task_categories-translation #task_categories-text2text-generation #task_categories-text-generation #annotations_creators-expert-generated #annotations_creators-found #language_creators-crowdsourced #multilinguality-translation #size_categories-100K<n<1M #language-English #language-Slovenian #license-cc-by-sa-4.0 #parallel data #rsdo #region-us
# Dataset Card for RSDO4 en-sl parallel corpus ### Dataset Summary The RSDO4 parallel corpus of English-Slovene and Slovene-English translation pairs was collected as part of work package 4 of the Slovene in the Digital Environment project. It contains texts collected from public institutions and texts submitted by individual donors through the text collection portal created within the project. The corpus consists of 964433 translation pairs (extracted from standard translation formats (TMX, XLIFF) or manually aligned) in randomized order which can be used for machine translation training. ### Supported Tasks and Leaderboards Machine translation. ### Languages English, Slovenian. ## Dataset Structure ### Data Instances A sample instance from the dataset: ### Data Fields - 'en_seq': a string containing the English sequence; - 'sl_seq': a string containing the Slovene sequence. ## Additional Information ### Dataset Curators Andraž Repar and Iztok Lebar Bajec. ### Licensing Information CC BY-SA 4.0. ### Contributions Thanks to @matejklemen for adding this dataset.
[ "# Dataset Card for RSDO4 en-sl parallel corpus", "### Dataset Summary\n\nThe RSDO4 parallel corpus of English-Slovene and Slovene-English translation pairs was collected as part of work package 4 of the Slovene in the Digital Environment project. It contains texts collected from public institutions and texts submitted by individual donors through the text collection portal created within the project. The corpus consists of 964433 translation pairs (extracted from standard translation formats (TMX, XLIFF) or manually aligned) in randomized order which can be used for machine translation training.", "### Supported Tasks and Leaderboards\n\nMachine translation.", "### Languages\n\nEnglish, Slovenian.", "## Dataset Structure", "### Data Instances\n\nA sample instance from the dataset:", "### Data Fields\n\n- 'en_seq': a string containing the English sequence; \n- 'sl_seq': a string containing the Slovene sequence.", "## Additional Information", "### Dataset Curators\n\nAndraž Repar and Iztok Lebar Bajec.", "### Licensing Information\n\nCC BY-SA 4.0.", "### Contributions\n\nThanks to @matejklemen for adding this dataset." ]
[ "TAGS\n#task_categories-translation #task_categories-text2text-generation #task_categories-text-generation #annotations_creators-expert-generated #annotations_creators-found #language_creators-crowdsourced #multilinguality-translation #size_categories-100K<n<1M #language-English #language-Slovenian #license-cc-by-sa-4.0 #parallel data #rsdo #region-us \n", "# Dataset Card for RSDO4 en-sl parallel corpus", "### Dataset Summary\n\nThe RSDO4 parallel corpus of English-Slovene and Slovene-English translation pairs was collected as part of work package 4 of the Slovene in the Digital Environment project. It contains texts collected from public institutions and texts submitted by individual donors through the text collection portal created within the project. The corpus consists of 964433 translation pairs (extracted from standard translation formats (TMX, XLIFF) or manually aligned) in randomized order which can be used for machine translation training.", "### Supported Tasks and Leaderboards\n\nMachine translation.", "### Languages\n\nEnglish, Slovenian.", "## Dataset Structure", "### Data Instances\n\nA sample instance from the dataset:", "### Data Fields\n\n- 'en_seq': a string containing the English sequence; \n- 'sl_seq': a string containing the Slovene sequence.", "## Additional Information", "### Dataset Curators\n\nAndraž Repar and Iztok Lebar Bajec.", "### Licensing Information\n\nCC BY-SA 4.0.", "### Contributions\n\nThanks to @matejklemen for adding this dataset." ]
62c78627f3072a1454fa0cb0184737cafe5e4198
# HumanEval-X

## Dataset Description
[HumanEval-X](https://github.com/THUDM/CodeGeeX) is a benchmark for evaluating the multilingual ability of code generative models. It consists of 820 high-quality human-crafted data samples (each with test cases) in Python, C++, Java, JavaScript, and Go, and can be used for various tasks, such as code generation and translation.

## Languages

The dataset contains coding problems in 5 programming languages: Python, C++, Java, JavaScript, and Go.

## Dataset Structure
To load the dataset you need to specify a subset among the 5 existing languages `[python, cpp, go, java, js]`. By default `python` is loaded.

```python
from datasets import load_dataset

load_dataset("THUDM/humaneval-x", "js")

DatasetDict({
    test: Dataset({
        features: ['task_id', 'prompt', 'declaration', 'canonical_solution', 'test', 'example_test'],
        num_rows: 164
    })
})
```

```python
next(iter(data["test"]))

{'task_id': 'JavaScript/0',
 'prompt': '/* Check if in given list of numbers, are any two numbers closer to each other than\n given threshold.\n >>> hasCloseElements([1.0, 2.0, 3.0], 0.5)\n false\n >>> hasCloseElements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3)\n true\n */\nconst hasCloseElements = (numbers, threshold) => {\n',
 'declaration': '\nconst hasCloseElements = (numbers, threshold) => {\n',
 'canonical_solution': ' for (let i = 0; i < numbers.length; i++) {\n for (let j = 0; j < numbers.length; j++) {\n if (i != j) {\n let distance = Math.abs(numbers[i] - numbers[j]);\n if (distance < threshold) {\n return true;\n }\n }\n }\n }\n return false;\n}\n\n',
 'test': 'const testHasCloseElements = () => {\n console.assert(hasCloseElements([1.0, 2.0, 3.9, 4.0, 5.0, 2.2], 0.3) === true)\n console.assert(\n hasCloseElements([1.0, 2.0, 3.9, 4.0, 5.0, 2.2], 0.05) === false\n )\n console.assert(hasCloseElements([1.0, 2.0, 5.9, 4.0, 5.0], 0.95) === true)\n console.assert(hasCloseElements([1.0, 2.0, 5.9, 4.0, 5.0], 0.8) === false)\n console.assert(hasCloseElements([1.0, 2.0, 3.0, 4.0, 5.0, 2.0], 0.1) === true)\n console.assert(hasCloseElements([1.1, 2.2, 3.1, 4.1, 5.1], 1.0) === true)\n console.assert(hasCloseElements([1.1, 2.2, 3.1, 4.1, 5.1], 0.5) === false)\n}\n\ntestHasCloseElements()\n',
 'example_test': 'const testHasCloseElements = () => {\n console.assert(hasCloseElements([1.0, 2.0, 3.0], 0.5) === false)\n console.assert(\n hasCloseElements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3) === true\n )\n}\ntestHasCloseElements()\n'}
```

## Data Fields

* ``task_id``: indicates the target language and ID of the problem. Language is one of ["Python", "Java", "JavaScript", "CPP", "Go"].
* ``prompt``: the function declaration and docstring, used for code generation.
* ``declaration``: only the function declaration, used for code translation.
* ``canonical_solution``: human-crafted example solutions.
* ``test``: hidden test samples, used for evaluation.
* ``example_test``: public test samples (appeared in prompt), used for evaluation.

## Data Splits

Each subset has one split: test.

## Citation Information

Refer to https://github.com/THUDM/CodeGeeX.
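For orientation, the `prompt`, `canonical_solution`, and `test` fields are designed to compose into a single runnable program, which is how functional correctness is checked. Below is a minimal sketch for the Python subset only; it assumes the Python `test` blocks are self-executing like the JavaScript example above, and it uses the reference solution in place of a model completion. For real multi-language evaluation, use the harness in the CodeGeeX repository.

```python
from datasets import load_dataset

# Load the Python subset; each subset has a single `test` split.
ds = load_dataset("THUDM/humaneval-x", "python", split="test")

problem = ds[0]
# Stand-in for a model completion: the human-written reference solution.
completion = problem["canonical_solution"]

# Declaration + docstring, then the completion, then the hidden tests.
# Assumption: the Python `test` block runs its assertions when executed,
# mirroring the self-executing JavaScript tests shown above.
program = problem["prompt"] + completion + "\n" + problem["test"]
exec(program)  # raises an exception if the solution fails the tests
print(problem["task_id"], "passed")
```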
THUDM/humaneval-x
[ "task_categories:text-generation", "task_ids:language-modeling", "language_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:multilingual", "size_categories:unknown", "language:code", "license:apache-2.0", "region:us" ]
2022-09-20T15:23:53+00:00
{"annotations_creators": [], "language_creators": ["crowdsourced", "expert-generated"], "language": ["code"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": ["unknown"], "source_datasets": [], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "HumanEval-X"}
2022-10-25T05:08:38+00:00
[]
[ "code" ]
TAGS #task_categories-text-generation #task_ids-language-modeling #language_creators-crowdsourced #language_creators-expert-generated #multilinguality-multilingual #size_categories-unknown #language-code #license-apache-2.0 #region-us
# HumanEval-X

## Dataset Description
HumanEval-X is a benchmark for evaluating the multilingual ability of code generative models. It consists of 820 high-quality human-crafted data samples (each with test cases) in Python, C++, Java, JavaScript, and Go, and can be used for various tasks, such as code generation and translation.

## Languages

The dataset contains coding problems in 5 programming languages: Python, C++, Java, JavaScript, and Go.

## Dataset Structure
To load the dataset you need to specify a subset among the 5 existing languages '[python, cpp, go, java, js]'. By default 'python' is loaded.

## Data Fields

* ''task_id'': indicates the target language and ID of the problem. Language is one of ["Python", "Java", "JavaScript", "CPP", "Go"].
* ''prompt'': the function declaration and docstring, used for code generation.
* ''declaration'': only the function declaration, used for code translation.
* ''canonical_solution'': human-crafted example solutions.
* ''test'': hidden test samples, used for evaluation.
* ''example_test'': public test samples (appeared in prompt), used for evaluation.

## Data Splits

Each subset has one split: test.



Refer to URL
[ "# HumanEval-X", "## Dataset Description\nHumanEval-X is a benchmark for evaluating the multilingual ability of code generative models. It consists of 820 high-quality human-crafted data samples (each with test cases) in Python, C++, Java, JavaScript, and Go, and can be used for various tasks, such as code generation and translation.", "## Languages\n\nThe dataset contains coding problems in 5 programming languages: Python, C++, Java, JavaScript, and Go.", "## Dataset Structure\nTo load the dataset you need to specify a subset among the 5 exiting languages '[python, cpp, go, java, js]'. By default 'python' is loaded.", "## Data Fields\n\n* ''task_id'': indicates the target language and ID of the problem. Language is one of [\"Python\", \"Java\", \"JavaScript\", \"CPP\", \"Go\"].\n* ''prompt'': the function declaration and docstring, used for code generation.\n* ''declaration'': only the function declaration, used for code translation. \n* ''canonical_solution'': human-crafted example solutions.\n* ''test'': hidden test samples, used for evaluation.\n* ''example_test'': public test samples (appeared in prompt), used for evaluation.", "## Data Splits\n\nEach subset has one split: test.\n\n\n\nRefer to URL" ]
[ "TAGS\n#task_categories-text-generation #task_ids-language-modeling #language_creators-crowdsourced #language_creators-expert-generated #multilinguality-multilingual #size_categories-unknown #language-code #license-apache-2.0 #region-us \n", "# HumanEval-X", "## Dataset Description\nHumanEval-X is a benchmark for evaluating the multilingual ability of code generative models. It consists of 820 high-quality human-crafted data samples (each with test cases) in Python, C++, Java, JavaScript, and Go, and can be used for various tasks, such as code generation and translation.", "## Languages\n\nThe dataset contains coding problems in 5 programming languages: Python, C++, Java, JavaScript, and Go.", "## Dataset Structure\nTo load the dataset you need to specify a subset among the 5 exiting languages '[python, cpp, go, java, js]'. By default 'python' is loaded.", "## Data Fields\n\n* ''task_id'': indicates the target language and ID of the problem. Language is one of [\"Python\", \"Java\", \"JavaScript\", \"CPP\", \"Go\"].\n* ''prompt'': the function declaration and docstring, used for code generation.\n* ''declaration'': only the function declaration, used for code translation. \n* ''canonical_solution'': human-crafted example solutions.\n* ''test'': hidden test samples, used for evaluation.\n* ''example_test'': public test samples (appeared in prompt), used for evaluation.", "## Data Splits\n\nEach subset has one split: test.\n\n\n\nRefer to URL" ]
09a7ed9517756e50b961dd44c17d91b2a9292bb0
# pytorch-image-models metrics This dataset contains metrics about the huggingface/pytorch-image-models package. Number of repositories in the dataset: 3615 Number of packages in the dataset: 89 ## Package dependents This contains the data available in the [used-by](https://github.com/huggingface/pytorch-image-models/network/dependents) tab on GitHub. ### Package & Repository star count This section shows the package and repository star count, individually. Package | Repository :-------------------------:|:-------------------------: ![pytorch-image-models-dependent package star count](./pytorch-image-models-dependents/resolve/main/pytorch-image-models-dependent_package_star_count.png) | ![pytorch-image-models-dependent repository star count](./pytorch-image-models-dependents/resolve/main/pytorch-image-models-dependent_repository_star_count.png) There are 18 packages that have more than 1000 stars. There are 39 repositories that have more than 1000 stars. The top 10 in each category are the following: *Package* [huggingface/transformers](https://github.com/huggingface/transformers): 70536 [fastai/fastai](https://github.com/fastai/fastai): 22776 [open-mmlab/mmdetection](https://github.com/open-mmlab/mmdetection): 21390 [MVIG-SJTU/AlphaPose](https://github.com/MVIG-SJTU/AlphaPose): 6424 [qubvel/segmentation_models.pytorch](https://github.com/qubvel/segmentation_models.pytorch): 6115 [awslabs/autogluon](https://github.com/awslabs/autogluon): 4818 [neuml/txtai](https://github.com/neuml/txtai): 2531 [open-mmlab/mmaction2](https://github.com/open-mmlab/mmaction2): 2357 [open-mmlab/mmselfsup](https://github.com/open-mmlab/mmselfsup): 2271 [lukas-blecher/LaTeX-OCR](https://github.com/lukas-blecher/LaTeX-OCR): 1999 *Repository* [huggingface/transformers](https://github.com/huggingface/transformers): 70536 [commaai/openpilot](https://github.com/commaai/openpilot): 35919 [facebookresearch/detectron2](https://github.com/facebookresearch/detectron2): 22287 [ray-project/ray](https://github.com/ray-project/ray): 22057 [open-mmlab/mmdetection](https://github.com/open-mmlab/mmdetection): 21390 [NVIDIA/DeepLearningExamples](https://github.com/NVIDIA/DeepLearningExamples): 9260 [microsoft/unilm](https://github.com/microsoft/unilm): 6664 [pytorch/tutorials](https://github.com/pytorch/tutorials): 6331 [qubvel/segmentation_models.pytorch](https://github.com/qubvel/segmentation_models.pytorch): 6115 [hpcaitech/ColossalAI](https://github.com/hpcaitech/ColossalAI): 4944 ### Package & Repository fork count This section shows the package and repository fork count, individually. Package | Repository :-------------------------:|:-------------------------: ![pytorch-image-models-dependent package forks count](./pytorch-image-models-dependents/resolve/main/pytorch-image-models-dependent_package_forks_count.png) | ![pytorch-image-models-dependent repository forks count](./pytorch-image-models-dependents/resolve/main/pytorch-image-models-dependent_repository_forks_count.png) There are 12 packages that have more than 200 forks. There are 28 repositories that have more than 200 forks. 
The top 10 in each category are the following: *Package* [huggingface/transformers](https://github.com/huggingface/transformers): 16175 [open-mmlab/mmdetection](https://github.com/open-mmlab/mmdetection): 7791 [fastai/fastai](https://github.com/fastai/fastai): 7296 [MVIG-SJTU/AlphaPose](https://github.com/MVIG-SJTU/AlphaPose): 1765 [qubvel/segmentation_models.pytorch](https://github.com/qubvel/segmentation_models.pytorch): 1217 [open-mmlab/mmaction2](https://github.com/open-mmlab/mmaction2): 787 [awslabs/autogluon](https://github.com/awslabs/autogluon): 638 [open-mmlab/mmselfsup](https://github.com/open-mmlab/mmselfsup): 321 [rwightman/efficientdet-pytorch](https://github.com/rwightman/efficientdet-pytorch): 265 [lukas-blecher/LaTeX-OCR](https://github.com/lukas-blecher/LaTeX-OCR): 247 *Repository* [huggingface/transformers](https://github.com/huggingface/transformers): 16175 [open-mmlab/mmdetection](https://github.com/open-mmlab/mmdetection): 7791 [commaai/openpilot](https://github.com/commaai/openpilot): 6603 [facebookresearch/detectron2](https://github.com/facebookresearch/detectron2): 6033 [ray-project/ray](https://github.com/ray-project/ray): 3879 [pytorch/tutorials](https://github.com/pytorch/tutorials): 3478 [NVIDIA/DeepLearningExamples](https://github.com/NVIDIA/DeepLearningExamples): 2499 [microsoft/unilm](https://github.com/microsoft/unilm): 1223 [qubvel/segmentation_models.pytorch](https://github.com/qubvel/segmentation_models.pytorch): 1217 [layumi/Person_reID_baseline_pytorch](https://github.com/layumi/Person_reID_baseline_pytorch): 928
open-source-metrics/pytorch-image-models-dependents
[ "license:apache-2.0", "github-stars", "region:us" ]
2022-09-20T17:47:36+00:00
{"license": "apache-2.0", "pretty_name": "pytorch-image-models metrics", "tags": ["github-stars"], "dataset_info": {"features": [{"name": "name", "dtype": "null"}, {"name": "stars", "dtype": "null"}, {"name": "forks", "dtype": "null"}], "splits": [{"name": "package"}, {"name": "repository"}], "download_size": 1798, "dataset_size": 0}}
2024-02-16T20:19:14+00:00
[]
[]
TAGS #license-apache-2.0 #github-stars #region-us
pytorch-image-models metrics ============================ This dataset contains metrics about the huggingface/pytorch-image-models package. Number of repositories in the dataset: 3615 Number of packages in the dataset: 89 Package dependents ------------------ This contains the data available in the used-by tab on GitHub. ### Package & Repository star count This section shows the package and repository star count, individually. There are 18 packages that have more than 1000 stars. There are 39 repositories that have more than 1000 stars. The top 10 in each category are the following: *Package* huggingface/transformers: 70536 fastai/fastai: 22776 open-mmlab/mmdetection: 21390 MVIG-SJTU/AlphaPose: 6424 qubvel/segmentation\_models.pytorch: 6115 awslabs/autogluon: 4818 neuml/txtai: 2531 open-mmlab/mmaction2: 2357 open-mmlab/mmselfsup: 2271 lukas-blecher/LaTeX-OCR: 1999 *Repository* huggingface/transformers: 70536 commaai/openpilot: 35919 facebookresearch/detectron2: 22287 ray-project/ray: 22057 open-mmlab/mmdetection: 21390 NVIDIA/DeepLearningExamples: 9260 microsoft/unilm: 6664 pytorch/tutorials: 6331 qubvel/segmentation\_models.pytorch: 6115 hpcaitech/ColossalAI: 4944 ### Package & Repository fork count This section shows the package and repository fork count, individually. There are 12 packages that have more than 200 forks. There are 28 repositories that have more than 200 forks. The top 10 in each category are the following: *Package* huggingface/transformers: 16175 open-mmlab/mmdetection: 7791 fastai/fastai: 7296 MVIG-SJTU/AlphaPose: 1765 qubvel/segmentation\_models.pytorch: 1217 open-mmlab/mmaction2: 787 awslabs/autogluon: 638 open-mmlab/mmselfsup: 321 rwightman/efficientdet-pytorch: 265 lukas-blecher/LaTeX-OCR: 247 *Repository* huggingface/transformers: 16175 open-mmlab/mmdetection: 7791 commaai/openpilot: 6603 facebookresearch/detectron2: 6033 ray-project/ray: 3879 pytorch/tutorials: 3478 NVIDIA/DeepLearningExamples: 2499 microsoft/unilm: 1223 qubvel/segmentation\_models.pytorch: 1217 layumi/Person\_reID\_baseline\_pytorch: 928
[ "### Package & Repository star count\n\n\nThis section shows the package and repository star count, individually.\n\n\n\nThere are 18 packages that have more than 1000 stars.\n\n\nThere are 39 repositories that have more than 1000 stars.\n\n\nThe top 10 in each category are the following:\n\n\n*Package*\n\n\nhuggingface/transformers: 70536\n\n\nfastai/fastai: 22776\n\n\nopen-mmlab/mmdetection: 21390\n\n\nMVIG-SJTU/AlphaPose: 6424\n\n\nqubvel/segmentation\\_models.pytorch: 6115\n\n\nawslabs/autogluon: 4818\n\n\nneuml/txtai: 2531\n\n\nopen-mmlab/mmaction2: 2357\n\n\nopen-mmlab/mmselfsup: 2271\n\n\nlukas-blecher/LaTeX-OCR: 1999\n\n\n*Repository*\n\n\nhuggingface/transformers: 70536\n\n\ncommaai/openpilot: 35919\n\n\nfacebookresearch/detectron2: 22287\n\n\nray-project/ray: 22057\n\n\nopen-mmlab/mmdetection: 21390\n\n\nNVIDIA/DeepLearningExamples: 9260\n\n\nmicrosoft/unilm: 6664\n\n\npytorch/tutorials: 6331\n\n\nqubvel/segmentation\\_models.pytorch: 6115\n\n\nhpcaitech/ColossalAI: 4944", "### Package & Repository fork count\n\n\nThis section shows the package and repository fork count, individually.\n\n\n\nThere are 12 packages that have more than 200 forks.\n\n\nThere are 28 repositories that have more than 200 forks.\n\n\nThe top 10 in each category are the following:\n\n\n*Package*\n\n\nhuggingface/transformers: 16175\n\n\nopen-mmlab/mmdetection: 7791\n\n\nfastai/fastai: 7296\n\n\nMVIG-SJTU/AlphaPose: 1765\n\n\nqubvel/segmentation\\_models.pytorch: 1217\n\n\nopen-mmlab/mmaction2: 787\n\n\nawslabs/autogluon: 638\n\n\nopen-mmlab/mmselfsup: 321\n\n\nrwightman/efficientdet-pytorch: 265\n\n\nlukas-blecher/LaTeX-OCR: 247\n\n\n*Repository*\n\n\nhuggingface/transformers: 16175\n\n\nopen-mmlab/mmdetection: 7791\n\n\ncommaai/openpilot: 6603\n\n\nfacebookresearch/detectron2: 6033\n\n\nray-project/ray: 3879\n\n\npytorch/tutorials: 3478\n\n\nNVIDIA/DeepLearningExamples: 2499\n\n\nmicrosoft/unilm: 1223\n\n\nqubvel/segmentation\\_models.pytorch: 1217\n\n\nlayumi/Person\\_reID\\_baseline\\_pytorch: 928" ]
[ "TAGS\n#license-apache-2.0 #github-stars #region-us \n", "### Package & Repository star count\n\n\nThis section shows the package and repository star count, individually.\n\n\n\nThere are 18 packages that have more than 1000 stars.\n\n\nThere are 39 repositories that have more than 1000 stars.\n\n\nThe top 10 in each category are the following:\n\n\n*Package*\n\n\nhuggingface/transformers: 70536\n\n\nfastai/fastai: 22776\n\n\nopen-mmlab/mmdetection: 21390\n\n\nMVIG-SJTU/AlphaPose: 6424\n\n\nqubvel/segmentation\\_models.pytorch: 6115\n\n\nawslabs/autogluon: 4818\n\n\nneuml/txtai: 2531\n\n\nopen-mmlab/mmaction2: 2357\n\n\nopen-mmlab/mmselfsup: 2271\n\n\nlukas-blecher/LaTeX-OCR: 1999\n\n\n*Repository*\n\n\nhuggingface/transformers: 70536\n\n\ncommaai/openpilot: 35919\n\n\nfacebookresearch/detectron2: 22287\n\n\nray-project/ray: 22057\n\n\nopen-mmlab/mmdetection: 21390\n\n\nNVIDIA/DeepLearningExamples: 9260\n\n\nmicrosoft/unilm: 6664\n\n\npytorch/tutorials: 6331\n\n\nqubvel/segmentation\\_models.pytorch: 6115\n\n\nhpcaitech/ColossalAI: 4944", "### Package & Repository fork count\n\n\nThis section shows the package and repository fork count, individually.\n\n\n\nThere are 12 packages that have more than 200 forks.\n\n\nThere are 28 repositories that have more than 200 forks.\n\n\nThe top 10 in each category are the following:\n\n\n*Package*\n\n\nhuggingface/transformers: 16175\n\n\nopen-mmlab/mmdetection: 7791\n\n\nfastai/fastai: 7296\n\n\nMVIG-SJTU/AlphaPose: 1765\n\n\nqubvel/segmentation\\_models.pytorch: 1217\n\n\nopen-mmlab/mmaction2: 787\n\n\nawslabs/autogluon: 638\n\n\nopen-mmlab/mmselfsup: 321\n\n\nrwightman/efficientdet-pytorch: 265\n\n\nlukas-blecher/LaTeX-OCR: 247\n\n\n*Repository*\n\n\nhuggingface/transformers: 16175\n\n\nopen-mmlab/mmdetection: 7791\n\n\ncommaai/openpilot: 6603\n\n\nfacebookresearch/detectron2: 6033\n\n\nray-project/ray: 3879\n\n\npytorch/tutorials: 3478\n\n\nNVIDIA/DeepLearningExamples: 2499\n\n\nmicrosoft/unilm: 1223\n\n\nqubvel/segmentation\\_models.pytorch: 1217\n\n\nlayumi/Person\\_reID\\_baseline\\_pytorch: 928" ]
f0f93f25d29f82efdd73689b88b36c8fc85d4e41
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP15 * Dataset: samsum * Config: samsum * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model.
autoevaluate/autoeval-eval-samsum-samsum-431a89-1518654983
[ "autotrain", "evaluation", "region:us" ]
2022-09-20T21:48:28+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP15", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}}
2022-09-20T22:13:17+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP15 * Dataset: samsum * Config: samsum * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @pszemraj for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP15\n* Dataset: samsum\n* Config: samsum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @pszemraj for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP15\n* Dataset: samsum\n* Config: samsum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @pszemraj for evaluating this model." ]
5a6a80994c21d0d9b4f87e828633e9aa549a4a8c
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP14 * Dataset: samsum * Config: samsum * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model.
autoevaluate/autoeval-eval-samsum-samsum-7e8d42-1518754984
[ "autotrain", "evaluation", "region:us" ]
2022-09-20T21:48:34+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP14", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}}
2022-09-20T22:20:18+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP14 * Dataset: samsum * Config: samsum * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @pszemraj for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP14\n* Dataset: samsum\n* Config: samsum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @pszemraj for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP14\n* Dataset: samsum\n* Config: samsum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @pszemraj for evaluating this model." ]
850f60cb653353971f22827cf61e6b1d1a2a53a5
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP15 * Dataset: kmfoda/booksum * Config: kmfoda--booksum * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model.
autoevaluate/autoeval-eval-kmfoda__booksum-kmfoda__booksum-61a81c-1518854985
[ "autotrain", "evaluation", "region:us" ]
2022-09-20T21:48:37+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["kmfoda/booksum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP15", "metrics": [], "dataset_name": "kmfoda/booksum", "dataset_config": "kmfoda--booksum", "dataset_split": "test", "col_mapping": {"text": "chapter", "target": "summary_text"}}}
2022-09-22T01:29:45+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP15 * Dataset: kmfoda/booksum * Config: kmfoda--booksum * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @pszemraj for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP15\n* Dataset: kmfoda/booksum\n* Config: kmfoda--booksum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @pszemraj for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP15\n* Dataset: kmfoda/booksum\n* Config: kmfoda--booksum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @pszemraj for evaluating this model." ]
bc5a20bfe51eff9d9e3e6bfe9d02ccb09cd15f72
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP15 * Dataset: billsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model.
autoevaluate/autoeval-eval-billsum-default-4428b0-1518954986
[ "autotrain", "evaluation", "region:us" ]
2022-09-20T21:48:43+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["billsum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP15", "metrics": [], "dataset_name": "billsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "summary"}}}
2022-09-22T03:13:05+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP15 * Dataset: billsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @pszemraj for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP15\n* Dataset: billsum\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @pszemraj for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP15\n* Dataset: billsum\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @pszemraj for evaluating this model." ]
eb2885f64a337ab00115293d9856a96f80b30d40
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/pegasus-x-large-book-summary * Dataset: samsum * Config: samsum * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model.
autoevaluate/autoeval-eval-samsum-samsum-b534aa-1519254997
[ "autotrain", "evaluation", "region:us" ]
2022-09-20T22:47:58+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "pszemraj/pegasus-x-large-book-summary", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}}
2022-09-20T23:18:15+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: pszemraj/pegasus-x-large-book-summary * Dataset: samsum * Config: samsum * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @pszemraj for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/pegasus-x-large-book-summary\n* Dataset: samsum\n* Config: samsum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @pszemraj for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/pegasus-x-large-book-summary\n* Dataset: samsum\n* Config: samsum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @pszemraj for evaluating this model." ]
ae75e6b3d921b85c9a7f5510181d1a32fc140c3c
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/pegasus-x-large-book-summary * Dataset: billsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model.
autoevaluate/autoeval-eval-billsum-default-dd03f7-1519455003
[ "autotrain", "evaluation", "region:us" ]
2022-09-21T01:14:54+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["billsum"], "eval_info": {"task": "summarization", "model": "pszemraj/pegasus-x-large-book-summary", "metrics": [], "dataset_name": "billsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "summary"}}}
2022-09-21T16:34:50+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: pszemraj/pegasus-x-large-book-summary * Dataset: billsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @pszemraj for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/pegasus-x-large-book-summary\n* Dataset: billsum\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @pszemraj for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/pegasus-x-large-book-summary\n* Dataset: billsum\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @pszemraj for evaluating this model." ]
84e95341fadae3179e6f9418e04ab530f0411814
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/pegasus-x-large-book-summary * Dataset: launch/gov_report * Config: plain_text * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model.
autoevaluate/autoeval-eval-launch__gov_report-plain_text-4ad6c8-1519755004
[ "autotrain", "evaluation", "region:us" ]
2022-09-21T01:15:01+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "pszemraj/pegasus-x-large-book-summary", "metrics": [], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-21T06:37:56+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: pszemraj/pegasus-x-large-book-summary * Dataset: launch/gov_report * Config: plain_text * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @pszemraj for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/pegasus-x-large-book-summary\n* Dataset: launch/gov_report\n* Config: plain_text\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @pszemraj for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/pegasus-x-large-book-summary\n* Dataset: launch/gov_report\n* Config: plain_text\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @pszemraj for evaluating this model." ]
8fcbf087a8ba256d1d8ad78d5474126481b43e73
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/pegasus-x-large-book-summary * Dataset: big_patent * Config: y * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model.
autoevaluate/autoeval-eval-big_patent-y-b4cccf-1519855005
[ "autotrain", "evaluation", "region:us" ]
2022-09-21T01:15:04+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["big_patent"], "eval_info": {"task": "summarization", "model": "pszemraj/pegasus-x-large-book-summary", "metrics": [], "dataset_name": "big_patent", "dataset_config": "y", "dataset_split": "test", "col_mapping": {"text": "description", "target": "abstract"}}}
2022-09-22T05:24:35+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: pszemraj/pegasus-x-large-book-summary * Dataset: big_patent * Config: y * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @pszemraj for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/pegasus-x-large-book-summary\n* Dataset: big_patent\n* Config: y\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @pszemraj for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/pegasus-x-large-book-summary\n* Dataset: big_patent\n* Config: y\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @pszemraj for evaluating this model." ]
94ff6a5935f6cd3ff8a915f76e6852c4a3667a7f
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: SamuelAllen123/t5-efficient-large-nl36_fine_tune_sum_V2 * Dataset: samsum * Config: samsum * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@samuelallen123](https://huggingface.co/samuelallen123) for evaluating this model.
autoevaluate/autoeval-eval-samsum-samsum-a5c306-1520055006
[ "autotrain", "evaluation", "region:us" ]
2022-09-21T01:15:11+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "SamuelAllen123/t5-efficient-large-nl36_fine_tune_sum_V2", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}}
2022-09-21T01:23:40+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: SamuelAllen123/t5-efficient-large-nl36_fine_tune_sum_V2 * Dataset: samsum * Config: samsum * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @samuelallen123 for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tune_sum_V2\n* Dataset: samsum\n* Config: samsum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @samuelallen123 for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tune_sum_V2\n* Dataset: samsum\n* Config: samsum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @samuelallen123 for evaluating this model." ]
169d0612fccaa4dd7bff2fa33ab533b40aeef69e
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: SamuelAllen123/t5-efficient-large-nl36_fine_tune_sum_V2 * Dataset: samsum * Config: samsum * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@samuelallen123](https://huggingface.co/samuelallen123) for evaluating this model.
autoevaluate/autoeval-eval-samsum-samsum-bf100b-1520255007
[ "autotrain", "evaluation", "region:us" ]
2022-09-21T01:15:16+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "SamuelAllen123/t5-efficient-large-nl36_fine_tune_sum_V2", "metrics": ["rouge"], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}}
2022-09-21T01:23:16+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: SamuelAllen123/t5-efficient-large-nl36_fine_tune_sum_V2 * Dataset: samsum * Config: samsum * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @samuelallen123 for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tune_sum_V2\n* Dataset: samsum\n* Config: samsum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @samuelallen123 for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tune_sum_V2\n* Dataset: samsum\n* Config: samsum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @samuelallen123 for evaluating this model." ]
523d566065cd18bc42172c82f9ffa933eaf29b05
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Zero-Shot Text Classification * Model: Tristan/opt-66b-copy * Dataset: Tristan/zero_shot_classification_test * Config: Tristan--zero_shot_classification_test * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@Tristan](https://huggingface.co/Tristan) for evaluating this model.
autoevaluate/autoeval-eval-Tristan__zero_shot_classification_test-Tristan__zero_sh-c10c5c-1520355008
[ "autotrain", "evaluation", "region:us" ]
2022-09-21T01:23:04+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["Tristan/zero_shot_classification_test"], "eval_info": {"task": "text_zero_shot_classification", "model": "Tristan/opt-66b-copy", "metrics": [], "dataset_name": "Tristan/zero_shot_classification_test", "dataset_config": "Tristan--zero_shot_classification_test", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}}
2022-09-21T02:16:17+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Zero-Shot Text Classification * Model: Tristan/opt-66b-copy * Dataset: Tristan/zero_shot_classification_test * Config: Tristan--zero_shot_classification_test * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @Tristan for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: Tristan/opt-66b-copy\n* Dataset: Tristan/zero_shot_classification_test\n* Config: Tristan--zero_shot_classification_test\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @Tristan for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: Tristan/opt-66b-copy\n* Dataset: Tristan/zero_shot_classification_test\n* Config: Tristan--zero_shot_classification_test\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @Tristan for evaluating this model." ]
5d3309b8aa10d7cf28752a9589c8a8a99325e069
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: SebastianS/distilbert-base-uncased-finetuned-squad-d5716d28 * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@ColdYoungGuy](https://huggingface.co/ColdYoungGuy) for evaluating this model.
autoevaluate/autoeval-eval-squad_v2-squad_v2-e4ddf6-1520555010
[ "autotrain", "evaluation", "region:us" ]
2022-09-21T03:30:06+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "SebastianS/distilbert-base-uncased-finetuned-squad-d5716d28", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-09-21T03:32:36+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Question Answering * Model: SebastianS/distilbert-base-uncased-finetuned-squad-d5716d28 * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @ColdYoungGuy for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: SebastianS/distilbert-base-uncased-finetuned-squad-d5716d28\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @ColdYoungGuy for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: SebastianS/distilbert-base-uncased-finetuned-squad-d5716d28\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @ColdYoungGuy for evaluating this model." ]
f9fb35f4134e32b9c8100199d949398fd6d08a5f
We partition the earnings22 dataset at https://huggingface.co/datasets/anton-l/earnings22_baseline_5_gram by `source_id`:

- Validation: 4420696, 4448760, 4461799, 4469836, 4473238, 4482110
- Test: 4432298, 4450488, 4470290, 4479741, 4483338, 4485244
- Train: remainder

The official script for processing these splits will be released shortly.
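Until then, the partition can be reproduced with a short filter — a minimal sketch, assuming the base repo loads with the standard `datasets` API and exposes a `source_id` column (both unverified here):

```python
from datasets import load_dataset

# Split IDs copied from the lists above; the `source_id` column name and the
# base repo's split layout are assumptions, not verified against the repo.
VALIDATION_IDS = {"4420696", "4448760", "4461799", "4469836", "4473238", "4482110"}
TEST_IDS = {"4432298", "4450488", "4470290", "4479741", "4483338", "4485244"}

ds = load_dataset("anton-l/earnings22_baseline_5_gram", split="train")

validation = ds.filter(lambda ex: str(ex["source_id"]) in VALIDATION_IDS)
test = ds.filter(lambda ex: str(ex["source_id"]) in TEST_IDS)
train = ds.filter(lambda ex: str(ex["source_id"]) not in VALIDATION_IDS | TEST_IDS)
```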
sanchit-gandhi/earnings22_split
[ "region:us" ]
2022-09-21T09:35:49+00:00
{}
2022-09-23T08:44:26+00:00
[]
[]
TAGS #region-us
We partition the earnings22 dataset at URL by 'source_id':

- Validation: 4420696, 4448760, 4461799, 4469836, 4473238, 4482110
- Test: 4432298, 4450488, 4470290, 4479741, 4483338, 4485244
- Train: remainder

The official script for processing these splits will be released shortly.
[]
[ "TAGS\n#region-us \n" ]
16c96aacfd2f858c7577cd1944a8e67992036e8c
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/pegasus-x-large-book-summary * Dataset: kmfoda/booksum * Config: kmfoda--booksum * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model.
autoevaluate/autoeval-eval-kmfoda__booksum-kmfoda__booksum-e42237-1523455078
[ "autotrain", "evaluation", "region:us" ]
2022-09-21T10:41:40+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["kmfoda/booksum"], "eval_info": {"task": "summarization", "model": "pszemraj/pegasus-x-large-book-summary", "metrics": [], "dataset_name": "kmfoda/booksum", "dataset_config": "kmfoda--booksum", "dataset_split": "test", "col_mapping": {"text": "chapter", "target": "summary_text"}}}
2022-09-21T17:28:50+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: pszemraj/pegasus-x-large-book-summary * Dataset: kmfoda/booksum * Config: kmfoda--booksum * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @pszemraj for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/pegasus-x-large-book-summary\n* Dataset: kmfoda/booksum\n* Config: kmfoda--booksum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @pszemraj for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/pegasus-x-large-book-summary\n* Dataset: kmfoda/booksum\n* Config: kmfoda--booksum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @pszemraj for evaluating this model." ]
7c1cc64b8570c0d0882b285941fd625c4bbb886c
# 1 Source Source: https://github.com/alibaba-research/ChineseBLUE # 2 Definition of the tagset ```python tag_set = [ 'B_手术', 'I_疾病和诊断', 'B_症状', 'I_解剖部位', 'I_药物', 'B_影像检查', 'B_药物', 'B_疾病和诊断', 'I_影像检查', 'I_手术', 'B_解剖部位', 'O', 'B_实验室检验', 'I_症状', 'I_实验室检验' ] tag2id = lambda tag: tag_set.index(tag) id2tag = lambda id: tag_set[id] ``` # 3 Citation To use this dataset in your work please cite: Ningyu Zhang, Qianghuai Jia, Kangping Yin, Liang Dong, Feng Gao, Nengwei Hua. Conceptualized Representation Learning for Chinese Biomedical Text Mining ``` @article{zhang2020conceptualized, title={Conceptualized Representation Learning for Chinese Biomedical Text Mining}, author={Zhang, Ningyu and Jia, Qianghuai and Yin, Kangping and Dong, Liang and Gao, Feng and Hua, Nengwei}, journal={arXiv preprint arXiv:2008.10813}, year={2020} } ```
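As a quick sanity check, the tag mappings defined in section 2 round-trip cleanly; a self-contained sketch (repeating the definitions above, with an illustrative BIO sequence):

```python
tag_set = [
    'B_手术', 'I_疾病和诊断', 'B_症状', 'I_解剖部位', 'I_药物',
    'B_影像检查', 'B_药物', 'B_疾病和诊断', 'I_影像检查', 'I_手术',
    'B_解剖部位', 'O', 'B_实验室检验', 'I_症状', 'I_实验室检验'
]
tag2id = lambda tag: tag_set.index(tag)
id2tag = lambda id: tag_set[id]

# Encode a BIO-style tag sequence to ids and decode it back.
tags = ['B_症状', 'I_症状', 'O', 'B_药物']
ids = [tag2id(t) for t in tags]          # -> [2, 13, 11, 6]
assert [id2tag(i) for i in ids] == tags  # lossless round trip
```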
Adapting/chinese_biomedical_NER_dataset
[ "license:mit", "region:us" ]
2022-09-21T11:52:05+00:00
{"license": "mit"}
2022-09-21T17:21:15+00:00
[]
[]
TAGS #license-mit #region-us
# 1 Source Source: URL # 2 Definition of the tagset # 3 Citation To use this dataset in your work please cite: Ningyu Zhang, Qianghuai Jia, Kangping Yin, Liang Dong, Feng Gao, Nengwei Hua. Conceptualized Representation Learning for Chinese Biomedical Text Mining
[ "# 1 Source\nSource: URL", "# 2 Definition of the tagset", "# 3 Citation\nTo use this dataset in your work please cite:\n\nNingyu Zhang, Qianghuai Jia, Kangping Yin, Liang Dong, Feng Gao, Nengwei Hua. Conceptualized Representation Learning for Chinese Biomedical Text Mining" ]
[ "TAGS\n#license-mit #region-us \n", "# 1 Source\nSource: URL", "# 2 Definition of the tagset", "# 3 Citation\nTo use this dataset in your work please cite:\n\nNingyu Zhang, Qianghuai Jia, Kangping Yin, Liang Dong, Feng Gao, Nengwei Hua. Conceptualized Representation Learning for Chinese Biomedical Text Mining" ]
51d9269a2818c7fe39b9380efc9a62f40a8e5b2e
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: SamuelAllen123/t5-efficient-large-nl36_fine_tune_sum_V2 * Dataset: cnn_dailymail * Config: 3.0.0 * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@samuelallen123](https://huggingface.co/samuelallen123) for evaluating this model.
autoevaluate/autoeval-eval-cnn_dailymail-3.0.0-bf74a8-1524255094
[ "autotrain", "evaluation", "region:us" ]
2022-09-21T14:21:44+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "SamuelAllen123/t5-efficient-large-nl36_fine_tune_sum_V2", "metrics": [], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "test", "col_mapping": {"text": "article", "target": "highlights"}}}
2022-09-21T17:43:44+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: SamuelAllen123/t5-efficient-large-nl36_fine_tune_sum_V2 * Dataset: cnn_dailymail * Config: 3.0.0 * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @samuelallen123 for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tune_sum_V2\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @samuelallen123 for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tune_sum_V2\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @samuelallen123 for evaluating this model." ]
662fce7ab3d2e18087973b1f15470b1dfaf81f9e
# Dataset Card for TellMeWhy ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://stonybrooknlp.github.io/tellmewhy/ - **Repository:** https://github.com/StonyBrookNLP/tellmewhy - **Paper:** https://aclanthology.org/2021.findings-acl.53/ - **Leaderboard:** None - **Point of Contact:** [Yash Kumar Lal](mailto:[email protected]) ### Dataset Summary TellMeWhy is a large-scale crowdsourced dataset made up of more than 30k questions and free-form answers concerning why characters in short narratives perform the actions described. ### Supported Tasks and Leaderboards The dataset is designed to test why-question answering abilities of models when bound by local context. ### Languages English ## Dataset Structure ### Data Instances A typical data point consists of a story, a question and a crowdsourced answer to that question. Additionally, the instance also indicates whether the question's answer would be implicit or if it is explicitly stated in text. If applicable, it also contains Likert scores (-2 to 2) about the answer's grammaticality and validity in the given context. ``` { "narrative":"Cam ordered a pizza and took it home. He opened the box to take out a slice. Cam discovered that the store did not cut the pizza for him. He looked for his pizza cutter but did not find it. He had to use his chef knife to cut a slice.", "question":"Why did Cam order a pizza?", "original_sentence_for_question":"Cam ordered a pizza and took it home.", "narrative_lexical_overlap":0.3333333333, "is_ques_answerable":"Not Answerable", "answer":"Cam was hungry.", "is_ques_answerable_annotator":"Not Answerable", "original_narrative_form":[ "Cam ordered a pizza and took it home.", "He opened the box to take out a slice.", "Cam discovered that the store did not cut the pizza for him.", "He looked for his pizza cutter but did not find it.", "He had to use his chef knife to cut a slice." ], "question_meta":"rocstories_narrative_41270_sentence_0_question_0", "helpful_sentences":[ ], "human_eval":false, "val_ann":[ ], "gram_ann":[ ] } ``` ### Data Fields - `question_meta` - Unique meta for each question in the corpus - `narrative` - Full narrative from ROCStories. 
Used as the context with which the question and answer are associated - `question` - Why question about an action or event in the narrative - `answer` - Crowdsourced answer to the question - `original_sentence_for_question` - Sentence in narrative from which question was generated - `narrative_lexical_overlap` - Unigram overlap of answer with the narrative - `is_ques_answerable` - Majority judgment by annotators on whether an answer to this question is explicitly stated in the narrative. If "Not Answerable", it is part of the Implicit-Answer questions subset, which is harder for models. - `is_ques_answerable_annotator` - Individual annotator judgment on whether an answer to this question is explicitly stated in the narrative. - `original_narrative_form` - ROCStories narrative as an array of its sentences - `human_eval` - Indicates whether a question is a specific part of the test set. Models should be evaluated for their answers on these questions using the human evaluation suite released by the authors. They advocate for this human evaluation to be the correct way to track progress on this dataset. - `val_ann` - Array of Likert scores (possible sizes are 0 and 3) about whether an answer is valid given the question and context. Empty arrays exist for cases where the human_eval flag is False. - `gram_ann` - Array of Likert scores (possible sizes are 0 and 3) about whether an answer is grammatical. Empty arrays exist for cases where the human_eval flag is False. ### Data Splits The data is split into training, validation, and test sets. | Train | Valid | Test | | ------ | ----- | ----- | | 23964 | 2992 | 3563 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data ROCStories corpus (Mostafazadeh et al., 2016) #### Initial Data Collection and Normalization ROCStories was used to create why-questions related to actions and events in the stories. #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process Amazon Mechanical Turk workers were provided a story and an associated why-question, and asked to answer. Three answers were collected for each question. For a small subset of questions, the quality of answers was also validated in a second round of annotation. This smaller subset should be used to perform human evaluation of any new models built for this dataset. #### Who are the annotators? Amazon Mechanical Turk workers ### Personal and Sensitive Information None ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Evaluation To evaluate progress on this dataset, the authors advocate for human evaluation and release a suite with the required settings [here](https://github.com/StonyBrookNLP/tellmewhy). Once inference on the test set has been completed, please filter out the answers on which human evaluation needs to be performed by selecting the questions (one answer per question, deduplication might be needed) in the test set where the `human_eval` flag is set to `True`. This subset can then be used to complete the requisite evaluation on TellMeWhy.
### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @inproceedings{lal-etal-2021-tellmewhy, title = "{T}ell{M}e{W}hy: A Dataset for Answering Why-Questions in Narratives", author = "Lal, Yash Kumar and Chambers, Nathanael and Mooney, Raymond and Balasubramanian, Niranjan", booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.findings-acl.53", doi = "10.18653/v1/2021.findings-acl.53", pages = "596--610", } ``` ### Contributions Thanks to [@yklal95](https://github.com/ykl7) for adding this dataset.
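A minimal sketch of the human-evaluation filtering step described above, assuming the standard `datasets` API and the field names listed in this card (the deduplication strategy here is one reasonable choice, not the authors' official one):

```python
from datasets import load_dataset

ds = load_dataset("StonyBrookNLP/tellmewhy", split="test")

# Keep only the rows flagged for human evaluation.
human_eval_subset = ds.filter(lambda ex: ex["human_eval"])

# One answer per question: drop repeated `question_meta` values.
# Stateful filter, so keep it single-process (num_proc > 1 would break it).
seen = set()
deduped = human_eval_subset.filter(
    lambda ex: ex["question_meta"] not in seen and not seen.add(ex["question_meta"])
)
print(len(deduped), "questions to send for human evaluation")
```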
StonyBrookNLP/tellmewhy
[ "task_categories:text2text-generation", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:unknown", "region:us" ]
2022-09-21T15:11:29+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text2text-generation"], "task_ids": [], "pretty_name": "TellMeWhy"}
2024-01-24T21:12:22+00:00
[]
[ "en" ]
TAGS #task_categories-text2text-generation #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-unknown #region-us
Dataset Card for TellMeWhy ========================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: URL * Leaderboard: None * Point of Contact: Yash Kumar Lal ### Dataset Summary TellMeWhy is a large-scale crowdsourced dataset made up of more than 30k questions and free-form answers concerning why characters in short narratives perform the actions described. ### Supported Tasks and Leaderboards The dataset is designed to test why-question answering abilities of models when bound by local context. ### Languages English Dataset Structure ----------------- ### Data Instances A typical data point consists of a story, a question and a crowdsourced answer to that question. Additionally, the instance also indicates whether the question's answer would be implicit or if it is explicitly stated in text. If applicable, it also contains Likert scores (-2 to 2) about the answer's grammaticality and validity in the given context. ### Data Fields * 'question\_meta' - Unique meta for each question in the corpus * 'narrative' - Full narrative from ROCStories. Used as the context with which the question and answer are associated * 'question' - Why question about an action or event in the narrative * 'answer' - Crowdsourced answer to the question * 'original\_sentence\_for\_question' - Sentence in narrative from which question was generated * 'narrative\_lexical\_overlap' - Unigram overlap of answer with the narrative * 'is\_ques\_answerable' - Majority judgment by annotators on whether an answer to this question is explicitly stated in the narrative. If "Not Answerable", it is part of the Implicit-Answer questions subset, which is harder for models. * 'is\_ques\_answerable\_annotator' - Individual annotator judgment on whether an answer to this question is explicitly stated in the narrative. * 'original\_narrative\_form' - ROCStories narrative as an array of its sentences * 'human\_eval' - Indicates whether a question is a specific part of the test set. Models should be evaluated for their answers on these questions using the human evaluation suite released by the authors. They advocate for this human evaluation to be the correct way to track progress on this dataset. * 'val\_ann' - Array of Likert scores (possible sizes are 0 and 3) about whether an answer is valid given the question and context. Empty arrays exist for cases where the human\_eval flag is False. * 'gram\_ann' - Array of Likert scores (possible sizes are 0 and 3) about whether an answer is grammatical. Empty arrays exist for cases where the human\_eval flag is False. ### Data Splits The data is split into training, validation, and test sets. Train: 23964, Valid: 2992, Test: 3563 Dataset Creation ---------------- ### Curation Rationale ### Source Data ROCStories corpus (Mostafazadeh et al., 2016) #### Initial Data Collection and Normalization ROCStories was used to create why-questions related to actions and events in the stories. #### Who are the source language producers? 
### Annotations #### Annotation process Amazon Mechanical Turk workers were provided a story and an associated why-question, and asked to answer. Three answers were collected for each question. For a small subset of questions, the quality of answers was also validated in a second round of annotation. This smaller subset should be used to perform human evaluation of any new models built for this dataset. #### Who are the annotators? Amazon Mechanical Turk workers ### Personal and Sensitive Information None Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Evaluation To evaluate progress on this dataset, the authors advocate for human evaluation and release a suite with the required settings here. Once inference on the test set has been completed, please filter out the answers on which human evaluation needs to be performed by selecting the questions (one answer per question, deduplication might be needed) in the test set where the 'human\_eval' flag is set to 'True'. This subset can then be used to complete the requisite evaluation on TellMeWhy. ### Dataset Curators ### Licensing Information ### Contributions Thanks to @yklal95 for adding this dataset.
[ "### Dataset Summary\n\n\nTellMeWhy is a large-scale crowdsourced dataset made up of more than 30k questions and free-form answers concerning why characters in short narratives perform the actions described.", "### Supported Tasks and Leaderboards\n\n\nThe dataset is designed to test why-question answering abilities of models when bound by local context.", "### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA typical data point consists of a story, a question and a crowdsourced answer to that question. Additionally, the instance also indicates whether the question's answer would be implicit or if it is explicitly stated in text. If applicable, it also contains Likert scores (-2 to 2) about the answer's grammaticality and validity in the given context.", "### Data Fields\n\n\n* 'question\\_meta' - Unique meta for each question in the corpus\n* 'narrative' - Full narrative from ROCStories. Used as the context with which the question and answer are associated\n* 'question' - Why question about an action or event in the narrative\n* 'answer' - Crowdsourced answer to the question\n* 'original\\_sentence\\_for\\_question' - Sentence in narrative from which question was generated\n* 'narrative\\_lexical\\_overlap' - Unigram overlap of answer with the narrative\n* 'is\\_ques\\_answerable' - Majority judgment by annotators on whether an answer to this question is explicitly stated in the narrative. If \"Not Answerable\", it is part of the Implicit-Answer questions subset, which is harder for models.\n* 'is\\_ques\\_answerable\\_annotator' - Individual annotator judgment on whether an answer to this question is explicitly stated in the narrative.\n* 'original\\_narrative\\_form' - ROCStories narrative as an array of its sentences\n* 'human\\_eval' - Indicates whether a question is a specific part of the test set. Models should be evaluated for their answers on these questions using the human evaluation suite released by the authors. They advocate for this human evaluation to be the correct way to track progress on this dataset.\n* 'val\\_ann' - Array of Likert scores (possible sizes are 0 and 3) about whether an answer is valid given the question and context. Empty arrays exist for cases where the human\\_eval flag is False.\n* 'gram\\_ann' - Array of Likert scores (possible sizes are 0 and 3) about whether an answer is grammatical. Empty arrays exist for cases where the human\\_eval flag is False.", "### Data Splits\n\n\nThe data is split into training, valiudation, and test sets.\n\n\nTrain: 23964, Valid: 2992, Test: 3563\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data\n\n\nROCStories corpus (Mostafazadeh et al, 2016)", "#### Initial Data Collection and Normalization\n\n\nROCStories was used to create why-questions related to actions and events in the stories.", "#### Who are the source language producers?", "### Annotations", "#### Annotation process\n\n\nAmazon Mechanical Turk workers were provided a story and an associated why-question, and asked to answer. Three answers were collected for each question. For a small subset of questions, the quality of answers was also validated in a second round of annotation. 
This smaller subset should be used to perform human evaluation of any new models built for this dataset.", "#### Who are the annotators?\n\n\nAmazon Mechanical Turk workers", "### Personal and Sensitive Information\n\n\nNone\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Evaluation\n\n\nTo evaluate progress on this dataset, the authors advocate for human evaluation and release a suite with the required settings here. Once inference on the test set has been completed, please filter out the answers on which human evaluation needs to be performed by selecting the questions (one answer per question, deduplication might be needed) in the test set where the 'human\\_eval' flag is set to 'True'. This subset can then be used to complete the requisite evaluation on TellMeWhy.", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @yklal95 for adding this dataset." ]
[ "TAGS\n#task_categories-text2text-generation #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-unknown #region-us \n", "### Dataset Summary\n\n\nTellMeWhy is a large-scale crowdsourced dataset made up of more than 30k questions and free-form answers concerning why characters in short narratives perform the actions described.", "### Supported Tasks and Leaderboards\n\n\nThe dataset is designed to test why-question answering abilities of models when bound by local context.", "### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA typical data point consists of a story, a question and a crowdsourced answer to that question. Additionally, the instance also indicates whether the question's answer would be implicit or if it is explicitly stated in text. If applicable, it also contains Likert scores (-2 to 2) about the answer's grammaticality and validity in the given context.", "### Data Fields\n\n\n* 'question\\_meta' - Unique meta for each question in the corpus\n* 'narrative' - Full narrative from ROCStories. Used as the context with which the question and answer are associated\n* 'question' - Why question about an action or event in the narrative\n* 'answer' - Crowdsourced answer to the question\n* 'original\\_sentence\\_for\\_question' - Sentence in narrative from which question was generated\n* 'narrative\\_lexical\\_overlap' - Unigram overlap of answer with the narrative\n* 'is\\_ques\\_answerable' - Majority judgment by annotators on whether an answer to this question is explicitly stated in the narrative. If \"Not Answerable\", it is part of the Implicit-Answer questions subset, which is harder for models.\n* 'is\\_ques\\_answerable\\_annotator' - Individual annotator judgment on whether an answer to this question is explicitly stated in the narrative.\n* 'original\\_narrative\\_form' - ROCStories narrative as an array of its sentences\n* 'human\\_eval' - Indicates whether a question is a specific part of the test set. Models should be evaluated for their answers on these questions using the human evaluation suite released by the authors. They advocate for this human evaluation to be the correct way to track progress on this dataset.\n* 'val\\_ann' - Array of Likert scores (possible sizes are 0 and 3) about whether an answer is valid given the question and context. Empty arrays exist for cases where the human\\_eval flag is False.\n* 'gram\\_ann' - Array of Likert scores (possible sizes are 0 and 3) about whether an answer is grammatical. Empty arrays exist for cases where the human\\_eval flag is False.", "### Data Splits\n\n\nThe data is split into training, valiudation, and test sets.\n\n\nTrain: 23964, Valid: 2992, Test: 3563\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data\n\n\nROCStories corpus (Mostafazadeh et al, 2016)", "#### Initial Data Collection and Normalization\n\n\nROCStories was used to create why-questions related to actions and events in the stories.", "#### Who are the source language producers?", "### Annotations", "#### Annotation process\n\n\nAmazon Mechanical Turk workers were provided a story and an associated why-question, and asked to answer. Three answers were collected for each question. For a small subset of questions, the quality of answers was also validated in a second round of annotation. 
This smaller subset should be used to perform human evaluation of any new models built for this dataset.", "#### Who are the annotators?\n\n\nAmazon Mechanical Turk workers", "### Personal and Sensitive Information\n\n\nNone\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Evaluation\n\n\nTo evaluate progress on this dataset, the authors advocate for human evaluation and release a suite with the required settings here. Once inference on the test set has been completed, please filter out the answers on which human evaluation needs to be performed by selecting the questions (one answer per question, deduplication might be needed) in the test set where the 'human\\_eval' flag is set to 'True'. This subset can then be used to complete the requisite evaluation on TellMeWhy.", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @yklal95 for adding this dataset." ]
0af0ec66aa94b834cd671169833768ef6063285e
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Zero-Shot Text Classification * Model: mathemakitten/opt-125m * Dataset: mathemakitten/winobias_antistereotype_dev * Config: mathemakitten--winobias_antistereotype_dev * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model.
autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_dev-mathemakitte-169e67-1524755111
[ "autotrain", "evaluation", "region:us" ]
2022-09-21T16:28:14+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_dev"], "eval_info": {"task": "text_zero_shot_classification", "model": "mathemakitten/opt-125m", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_dev", "dataset_config": "mathemakitten--winobias_antistereotype_dev", "dataset_split": "validation", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}}
2022-09-21T16:48:48+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Zero-Shot Text Classification * Model: mathemakitten/opt-125m * Dataset: mathemakitten/winobias_antistereotype_dev * Config: mathemakitten--winobias_antistereotype_dev * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @mathemakitten for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: mathemakitten/opt-125m\n* Dataset: mathemakitten/winobias_antistereotype_dev\n* Config: mathemakitten--winobias_antistereotype_dev\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @mathemakitten for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: mathemakitten/opt-125m\n* Dataset: mathemakitten/winobias_antistereotype_dev\n* Config: mathemakitten--winobias_antistereotype_dev\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @mathemakitten for evaluating this model." ]
d27fa3d9aea71a1de1cfc280bb534887b05f510d
This dataset consists of PubChem molecules downloaded from: https://ftp.ncbi.nlm.nih.gov/pubchem/Compound/CURRENT-Full/

In total, there are ~85M compounds for training, with an additional ~10M held out for validation and testing.
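Given the corpus size, streaming access is the practical default; a minimal sketch assuming the repo exposes a standard `train` split (split names are not documented in this card):

```python
from datasets import load_dataset

# Stream rather than download: ~85M training compounds make a full
# local copy impractical for quick inspection.
ds = load_dataset("zpn/pubchem_selfies", streaming=True, split="train")
print(next(iter(ds)))  # first record
```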
zpn/pubchem_selfies
[ "license:openrail", "region:us" ]
2022-09-21T18:51:06+00:00
{"license": "openrail"}
2022-10-04T15:15:19+00:00
[]
[]
TAGS #license-openrail #region-us
This dataset consists of PubChem molecules downloaded from: URL In total, there are ~85M compounds for training, with an additional ~10M held out for validation and testing.
[]
[ "TAGS\n#license-openrail #region-us \n" ]
8852346e4b76d1f815e1b272c840d45d7dc08ea8
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Zero-Shot Text Classification * Model: autoevaluate/zero-shot-classification * Dataset: mathemakitten/winobias_antistereotype_dev * Config: mathemakitten--winobias_antistereotype_dev * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model.
autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_dev-mathemakitte-f407ed-1527355152
[ "autotrain", "evaluation", "region:us" ]
2022-09-21T21:30:08+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_dev"], "eval_info": {"task": "text_zero_shot_classification", "model": "autoevaluate/zero-shot-classification", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_dev", "dataset_config": "mathemakitten--winobias_antistereotype_dev", "dataset_split": "validation", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}}
2022-09-21T21:50:42+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Zero-Shot Text Classification * Model: autoevaluate/zero-shot-classification * Dataset: mathemakitten/winobias_antistereotype_dev * Config: mathemakitten--winobias_antistereotype_dev * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @mathemakitten for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: autoevaluate/zero-shot-classification\n* Dataset: mathemakitten/winobias_antistereotype_dev\n* Config: mathemakitten--winobias_antistereotype_dev\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @mathemakitten for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: autoevaluate/zero-shot-classification\n* Dataset: mathemakitten/winobias_antistereotype_dev\n* Config: mathemakitten--winobias_antistereotype_dev\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @mathemakitten for evaluating this model." ]
3af942a32b98c8e16043ec591f92f5c368ed2953
# Avatar Dataset Raw data stack of 18,000 sample images created for [Avatar AI](https://t.me/AvatarAIBot). ## Features - 256X256 Medium Quality - Micro Bloom
phaticusthiccy/avatar
[ "region:us" ]
2022-09-21T21:30:24+00:00
{}
2022-09-21T21:40:14+00:00
[]
[]
TAGS #region-us
# Avatar Dataset Raw data stack of 18,000 sample images created for Avatar AI. ## Features - 256X256 Medium Quality - Micro Bloom
[ "# Avatar Dataset\n\nRaw data stack of 18,000 sample images created for Avatar AI.", "## Features\n\n- 256X256 Medium Quality\n- Micro Bloom" ]
[ "TAGS\n#region-us \n", "# Avatar Dataset\n\nRaw data stack of 18,000 sample images created for Avatar AI.", "## Features\n\n- 256X256 Medium Quality\n- Micro Bloom" ]
dc30b042b8caa6fc0cdbe7511e1867919f10fd80
# How Resilient are Imitation Learning Methods to Sub-Optimal Experts? ## Related Work Trajectories used in [How Resilient are Imitation Learning Methods to Sub-Optimal Experts?]() The code that uses this data is on GitHub: https://github.com/NathanGavenski/How-resilient-IL-methods-are # Structure These trajectories were generated with [Stable Baselines](https://stable-baselines.readthedocs.io/en/master/). Each file is a dictionary of trajectories with the following keys (see the loading sketch after this card): * actions: the action taken at timestep `t` * obs: the state at timestep `t` * rewards: the reward received after the action at timestep `t` * episode_returns: the aggregated reward of each episode (each file consists of 5000 runs) * episode_starts: whether that `obs` is the first state of an episode (boolean list) ## Citation Information ``` @inproceedings{gavenski2022how, title={How Resilient are Imitation Learning Methods to Sub-Optimal Experts?}, author={Nathan Gavenski and Juarez Monteiro and Adilson Medronha and Rodrigo Barros}, booktitle={2022 Brazilian Conference on Intelligent Systems (BRACIS)}, year={2022}, organization={IEEE} } ``` ## Contact: - [Nathan Schneider Gavenski]([email protected]) - [Juarez Monteiro]([email protected]) - [Adilson Medronha]([email protected]) - [Rodrigo C. Barros]([email protected])
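A loading sketch under explicit assumptions: the card does not state how the files are serialized, so the `.npz` filename and `np.load` call below are hypothetical (they follow the Stable Baselines expert-trajectory convention) — substitute whatever deserialization the files actually require:

```python
import numpy as np

# Hypothetical filename/format: assumes the Stable Baselines .npz expert
# layout with the keys listed in the Structure section above.
data = np.load("expert_trajectories.npz", allow_pickle=True)

rewards = np.asarray(data["rewards"], dtype=np.float64)
episode_starts = np.asarray(data["episode_starts"], dtype=bool)

# Recompute per-episode returns from the flat reward stream and compare
# them against the stored `episode_returns` field.
boundaries = np.flatnonzero(episode_starts)      # indices where episodes begin
per_episode = np.split(rewards, boundaries[1:])  # boundaries[0] is index 0
returns = np.array([chunk.sum() for chunk in per_episode])
print(returns[:5], np.asarray(data["episode_returns"])[:5])
```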
NathanGavenski/How-Resilient-are-Imitation-Learning-Methods-to-Sub-Optimal-Experts
[ "task_categories:other", "annotations_creators:machine-generated", "language_creators:expert-generated", "size_categories:100B<n<1T", "source_datasets:original", "license:mit", "Imitation Learning", "Expert Trajectories", "Classic Control", "region:us" ]
2022-09-21T22:41:37+00:00
{"annotations_creators": ["machine-generated"], "language_creators": ["expert-generated"], "language": [], "license": ["mit"], "multilinguality": [], "size_categories": ["100B<n<1T"], "source_datasets": ["original"], "task_categories": ["other"], "task_ids": [], "pretty_name": "How Resilient are Imitation Learning Methods to Sub-Optimal Experts?", "tags": ["Imitation Learning", "Expert Trajectories", "Classic Control"]}
2022-10-25T13:48:38+00:00
[]
[]
TAGS #task_categories-other #annotations_creators-machine-generated #language_creators-expert-generated #size_categories-100B<n<1T #source_datasets-original #license-mit #Imitation Learning #Expert Trajectories #Classic Control #region-us
# How Resilient are Imitation Learning Methods to Sub-Optimal Experts? ## Related Work Trajectories used in [How Resilient are Imitation Learning Methods to Sub-Optimal Experts?]() The code that uses this data is on GitHub: URL # Structure These trajectories were generated with Stable Baselines. Each file is a dictionary of trajectories with the following keys: * actions: the action taken at timestep 't' * obs: the state at timestep 't' * rewards: the reward received after the action at timestep 't' * episode_returns: the aggregated reward of each episode (each file consists of 5000 runs) * episode_starts: whether that 'obs' is the first state of an episode (boolean list) ## Contact: - Nathan Schneider Gavenski - Juarez Monteiro - Adilson Medronha - Rodrigo C. Barros
[ "# How Resilient are Imitation Learning Methods to Sub-Optimal Experts?", "## Related Work\nTrajectories used in [How Resilient are Imitation Learning Methods to Sub-Optimal Experts?]()\nThe code that uses this data is on GitHub: URL", "# Structure\nThese trajectories are formed by using Stable Baselines.\nEach file is a dictionary of a set of trajectories with the following keys:\n\n* actions: the action in the given timestamp 't'\n* obs: current state in the given timestamp 't'\n* rewards: reward retrieved after the action in the given timestamp 't'\n* episode_returns: The aggregated reward of each episode (each file consists of 5000 runs)\n* episode_Starts: Whether that 'obs' is the first state of an episode (boolean list)", "## Contact:\n- Nathan Schneider Gavenski\n- Juarez Monteiro\n- Adilson Medronha\n- Rodrigo C. Barros" ]
[ "TAGS\n#task_categories-other #annotations_creators-machine-generated #language_creators-expert-generated #size_categories-100B<n<1T #source_datasets-original #license-mit #Imitation Learning #Expert Trajectories #Classic Control #region-us \n", "# How Resilient are Imitation Learning Methods to Sub-Optimal Experts?", "## Related Work\nTrajectories used in [How Resilient are Imitation Learning Methods to Sub-Optimal Experts?]()\nThe code that uses this data is on GitHub: URL", "# Structure\nThese trajectories are formed by using Stable Baselines.\nEach file is a dictionary of a set of trajectories with the following keys:\n\n* actions: the action in the given timestamp 't'\n* obs: current state in the given timestamp 't'\n* rewards: reward retrieved after the action in the given timestamp 't'\n* episode_returns: The aggregated reward of each episode (each file consists of 5000 runs)\n* episode_Starts: Whether that 'obs' is the first state of an episode (boolean list)", "## Contact:\n- Nathan Schneider Gavenski\n- Juarez Monteiro\n- Adilson Medronha\n- Rodrigo C. Barros" ]
aba349e6b3a4d06820576289db881e37f2d5c5e3
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: ARTeLab/it5-summarization-fanpage * Dataset: scan * Config: simple * Split: train To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@test_yoon_0921](https://huggingface.co/test_yoon_0921) for evaluating this model.
autoevaluate/autoeval-eval-scan-simple-0b9bd3-1528755178
[ "autotrain", "evaluation", "region:us" ]
2022-09-22T03:23:11+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["scan"], "eval_info": {"task": "summarization", "model": "ARTeLab/it5-summarization-fanpage", "metrics": [], "dataset_name": "scan", "dataset_config": "simple", "dataset_split": "train", "col_mapping": {"text": "commands", "target": "actions"}}}
2022-09-22T03:29:45+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: ARTeLab/it5-summarization-fanpage * Dataset: scan * Config: simple * Split: train To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @test_yoon_0921 for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: ARTeLab/it5-summarization-fanpage\n* Dataset: scan\n* Config: simple\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @test_yoon_0921 for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: ARTeLab/it5-summarization-fanpage\n* Dataset: scan\n* Config: simple\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @test_yoon_0921 for evaluating this model." ]
8381f2d7cd133cc20378a943ae802a21e0dd1a11
# AutoTrain Dataset for project: nllb_600_ft ## Dataset Description This dataset has been automatically processed by AutoTrain for project nllb_600_ft. ### Languages The BCP-47 code for the dataset's language is unk. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "feat_id": "772", "feat_URL": "https://en.wikivoyage.org/wiki/Apia", "feat_domain": "wikivoyage", "feat_topic": "Travel", "feat_has_image": "0", "feat_has_hyperlink": "0", "text": "All the ships were sunk, except for one British cruiser. Nearly 200 American and German lives were lost.", "target": "\u0628\u0647\u200c\u062c\u0632 \u06cc\u06a9 \u06a9\u0634\u062a\u06cc \u062c\u0646\u06af\u06cc \u0627\u0646\u06af\u0644\u06cc\u0633\u06cc \u0647\u0645\u0647 \u06a9\u0634\u062a\u06cc\u200c\u0647\u0627 \u063a\u0631\u0642 \u0634\u062f\u0646\u062f\u060c \u0648 \u0646\u0632\u062f\u06cc\u06a9 \u0628\u0647 200 \u0646\u0641\u0631 \u0622\u0645\u0631\u06cc\u06a9\u0627\u06cc\u06cc \u0648 \u0622\u0644\u0645\u0627\u0646\u06cc \u062c\u0627\u0646 \u062e\u0648\u062f \u0631\u0627 \u0627\u0632 \u062f\u0633\u062a \u062f\u0627\u062f\u0646\u062f." }, { "feat_id": "195", "feat_URL": "https://en.wikinews.org/wiki/Mitt_Romney_wins_Iowa_Caucus_by_eight_votes_over_surging_Rick_Santorum", "feat_domain": "wikinews", "feat_topic": "Politics", "feat_has_image": "0", "feat_has_hyperlink": "0", "text": "Bachmann, who won the Ames Straw Poll in August, decided to end her campaign.", "target": "\u0628\u0627\u062e\u0645\u0646\u060c \u06a9\u0647 \u062f\u0631 \u0645\u0627\u0647 \u0622\u06af\u0648\u0633\u062a \u0628\u0631\u0646\u062f\u0647 \u0646\u0638\u0631\u0633\u0646\u062c\u06cc \u0622\u0645\u0633 \u0627\u0633\u062a\u0631\u0627\u0648 \u0634\u062f\u060c \u062a\u0635\u0645\u06cc\u0645 \u06af\u0631\u0641\u062a \u06a9\u0645\u067e\u06cc\u0646 \u062e\u0648\u062f \u0631\u0627 \u062e\u0627\u062a\u0645\u0647 \u062f\u0647\u062f." } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "feat_id": "Value(dtype='string', id=None)", "feat_URL": "Value(dtype='string', id=None)", "feat_domain": "Value(dtype='string', id=None)", "feat_topic": "Value(dtype='string', id=None)", "feat_has_image": "Value(dtype='string', id=None)", "feat_has_hyperlink": "Value(dtype='string', id=None)", "text": "Value(dtype='string', id=None)", "target": "Value(dtype='string', id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follow: | Split name | Num samples | | ------------ | ------------------- | | train | 1608 | | valid | 402 |
mehr4n-m/autotrain-data-nllb_600_ft
[ "region:us" ]
2022-09-22T04:51:54+00:00
{"task_categories": ["conditional-text-generation"]}
2022-09-22T04:54:15+00:00
[]
[]
TAGS #region-us
AutoTrain Dataset for project: nllb\_600\_ft ============================================ Dataset Description ------------------- This dataset has been automatically processed by AutoTrain for project nllb\_600\_ft. ### Languages The BCP-47 code for the dataset's language is unk. Dataset Structure ----------------- ### Data Instances A sample from this dataset looks as follows: ### Dataset Fields The dataset has the following fields (also called "features"): ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follows:
[ "### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA sample from this dataset looks as follows:", "### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):", "### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:" ]
[ "TAGS\n#region-us \n", "### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA sample from this dataset looks as follows:", "### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):", "### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:" ]
15477fbdfae891174be78e6285353d67d3b712cb
# Dataset Card for ssj500k **Important**: there exists another HF implementation of the dataset ([classla/ssj500k](https://huggingface.co/datasets/classla/ssj500k)), but it seems to be more narrowly focused. **This implementation is designed for more general use** - the CLASSLA version seems to expose only the specific training/validation/test annotations used in the CLASSLA library, for only a subset of the data. ### Dataset Summary The ssj500k training corpus contains about 500 000 tokens manually annotated on the levels of tokenization, sentence segmentation, morphosyntactic tagging, and lemmatization. It is also partially annotated for the following tasks: - named entity recognition (config `named_entity_recognition`) - dependency parsing(*), Universal Dependencies style (config `dependency_parsing_ud`) - dependency parsing, JOS/MULTEXT-East style (config `dependency_parsing_jos`) - semantic role labeling (config `semantic_role_labeling`) - multi-word expressions (config `multiword_expressions`) If you want to load all the data along with their partial annotations, please use the config `all_data`. \* _The UD dependency parsing labels are included here for completeness, but using the dataset [universal_dependencies](https://huggingface.co/datasets/universal_dependencies) should be preferred for dependency parsing applications to ensure you are using the most up-to-date data._ ### Supported Tasks and Leaderboards Sentence tokenization, sentence segmentation, morphosyntactic tagging, lemmatization, named entity recognition, dependency parsing, semantic role labeling, multi-word expression detection. ### Languages Slovenian. ## Dataset Structure ### Data Instances A sample instance from the dataset (using the config `all_data`): ``` { 'id_doc': 'ssj1', 'idx_par': 0, 'idx_sent': 0, 'id_words': ['ssj1.1.1.t1', 'ssj1.1.1.t2', 'ssj1.1.1.t3', 'ssj1.1.1.t4', 'ssj1.1.1.t5', 'ssj1.1.1.t6', 'ssj1.1.1.t7', 'ssj1.1.1.t8', 'ssj1.1.1.t9', 'ssj1.1.1.t10', 'ssj1.1.1.t11', 'ssj1.1.1.t12', 'ssj1.1.1.t13', 'ssj1.1.1.t14', 'ssj1.1.1.t15', 'ssj1.1.1.t16', 'ssj1.1.1.t17', 'ssj1.1.1.t18', 'ssj1.1.1.t19', 'ssj1.1.1.t20', 'ssj1.1.1.t21', 'ssj1.1.1.t22', 'ssj1.1.1.t23', 'ssj1.1.1.t24'], 'words': ['"', 'Tistega', 'večera', 'sem', 'preveč', 'popil', ',', 'zgodilo', 'se', 'je', 'mesec', 'dni', 'po', 'tem', ',', 'ko', 'sem', 'izvedel', ',', 'da', 'me', 'žena', 'vara', '.'], 'lemmas': ['"', 'tisti', 'večer', 'biti', 'preveč', 'popiti', ',', 'zgoditi', 'se', 'biti', 'mesec', 'dan', 'po', 'ta', ',', 'ko', 'biti', 'izvedeti', ',', 'da', 'jaz', 'žena', 'varati', '.'], 'msds': ['UPosTag=PUNCT', 'UPosTag=DET|Case=Gen|Gender=Masc|Number=Sing|PronType=Dem', 'UPosTag=NOUN|Case=Gen|Gender=Masc|Number=Sing', 'UPosTag=AUX|Mood=Ind|Number=Sing|Person=1|Polarity=Pos|Tense=Pres|VerbForm=Fin', 'UPosTag=DET|PronType=Ind', 'UPosTag=VERB|Aspect=Perf|Gender=Masc|Number=Sing|VerbForm=Part', 'UPosTag=PUNCT', 'UPosTag=VERB|Aspect=Perf|Gender=Neut|Number=Sing|VerbForm=Part', 'UPosTag=PRON|PronType=Prs|Reflex=Yes|Variant=Short', 'UPosTag=AUX|Mood=Ind|Number=Sing|Person=3|Polarity=Pos|Tense=Pres|VerbForm=Fin', 'UPosTag=NOUN|Animacy=Inan|Case=Acc|Gender=Masc|Number=Sing', 'UPosTag=NOUN|Case=Gen|Gender=Masc|Number=Plur', 'UPosTag=ADP|Case=Loc', 'UPosTag=DET|Case=Loc|Gender=Neut|Number=Sing|PronType=Dem', 'UPosTag=PUNCT', 'UPosTag=SCONJ', 'UPosTag=AUX|Mood=Ind|Number=Sing|Person=1|Polarity=Pos|Tense=Pres|VerbForm=Fin', 'UPosTag=VERB|Aspect=Perf|Gender=Masc|Number=Sing|VerbForm=Part', 'UPosTag=PUNCT', 'UPosTag=SCONJ', 
'UPosTag=PRON|Case=Acc|Number=Sing|Person=1|PronType=Prs|Variant=Short', 'UPosTag=NOUN|Case=Nom|Gender=Fem|Number=Sing', 'UPosTag=VERB|Aspect=Imp|Mood=Ind|Number=Sing|Person=3|Tense=Pres|VerbForm=Fin', 'UPosTag=PUNCT'], 'has_ne_ann': True, 'has_ud_dep_ann': True, 'has_jos_dep_ann': True, 'has_srl_ann': True, 'has_mwe_ann': True, 'ne_tags': ['O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O'], 'ud_dep_head': [5, 2, 5, 5, 5, -1, 7, 5, 7, 7, 7, 10, 13, 10, 17, 17, 17, 13, 22, 22, 22, 22, 17, 5], 'ud_dep_rel': ['punct', 'det', 'obl', 'aux', 'advmod', 'root', 'punct', 'parataxis', 'expl', 'aux', 'obl', 'nmod', 'case', 'nmod', 'punct', 'mark', 'aux', 'acl', 'punct', 'mark', 'obj', 'nsubj', 'ccomp', 'punct'], 'jos_dep_head': [-1, 2, 5, 5, 5, -1, -1, -1, 7, 7, 7, 10, 13, 10, -1, 17, 17, 13, -1, 22, 22, 22, 17, -1], 'jos_dep_rel': ['Root', 'Atr', 'AdvO', 'PPart', 'AdvM', 'Root', 'Root', 'Root', 'PPart', 'PPart', 'AdvO', 'Atr', 'Atr', 'Atr', 'Root', 'Conj', 'PPart', 'Atr', 'Root', 'Conj', 'Obj', 'Sb', 'Obj', 'Root'], 'srl_info': [ {'idx_arg': 2, 'idx_head': 5, 'role': 'TIME'}, {'idx_arg': 4, 'idx_head': 5, 'role': 'QUANT'}, {'idx_arg': 10, 'idx_head': 7, 'role': 'TIME'}, {'idx_arg': 20, 'idx_head': 22, 'role': 'PAT'}, {'idx_arg': 21, 'idx_head': 22, 'role': 'ACT'}, {'idx_arg': 22, 'idx_head': 17, 'role': 'RESLT'} ], 'mwe_info': [ {'type': 'IRV', 'word_indices': [7, 8]} ] } ``` ### Data Fields The following attributes are present in the most general config (`all_data`). Please see below for attributes present in the specific configs. - `id_doc`: a string containing the identifier of the document; - `idx_par`: an int32 containing the consecutive number of the paragraph, which the current sentence is a part of; - `idx_sent`: an int32 containing the consecutive number of the current sentence inside the current paragraph; - `id_words`: a list of strings containing the identifiers of words - potentially redundant, helpful for connecting the dataset with external datasets like coref149; - `words`: a list of strings containing the words in the current sentence; - `lemmas`: a list of strings containing the lemmas in the current sentence; - `msds`: a list of strings containing the morphosyntactic description of words in the current sentence; - `has_ne_ann`: a bool indicating whether the current example has named entities annotated; - `has_ud_dep_ann`: a bool indicating whether the current example has dependencies (in UD style) annotated; - `has_jos_dep_ann`: a bool indicating whether the current example has dependencies (in JOS style) annotated; - `has_srl_ann`: a bool indicating whether the current example has semantic roles annotated; - `has_mwe_ann`: a bool indicating whether the current example has multi-word expressions annotated; - `ne_tags`: a list of strings containing the named entity tags encoded using IOB2 - if `has_ne_ann=False` all tokens are annotated with `"N/A"`; - `ud_dep_head`: a list of int32 containing the head index for each word (using UD guidelines) - the head index of the root word is `-1`; if `has_ud_dep_ann=False` all tokens are annotated with `-2`; - `ud_dep_rel`: a list of strings containing the relation with the head for each word (using UD guidelines) - if `has_ud_dep_ann=False` all tokens are annotated with `"N/A"`; - `jos_dep_head`: a list of int32 containing the head index for each word (using JOS guidelines) - the head index of the root word is `-1`; if `has_jos_dep_ann=False` all tokens are annotated with 
`-2`;
- `jos_dep_rel`: a list of strings containing the relation with the head for each word (using JOS guidelines) - if `has_jos_dep_ann=False` all tokens are annotated with `"N/A"`;
- `srl_info`: a list of dicts, each containing the index of the argument word, the index of the head (verb) word, and the semantic role - if `has_srl_ann=False` this list is empty;
- `mwe_info`: a list of dicts, each containing word indices and the type of a multi-word expression;

#### Data fields in 'named_entity_recognition'

```
['id_doc', 'idx_par', 'idx_sent', 'id_words', 'words', 'lemmas', 'msds', 'ne_tags']
```

#### Data fields in 'dependency_parsing_ud'

```
['id_doc', 'idx_par', 'idx_sent', 'id_words', 'words', 'lemmas', 'msds', 'ud_dep_head', 'ud_dep_rel']
```

#### Data fields in 'dependency_parsing_jos'

```
['id_doc', 'idx_par', 'idx_sent', 'id_words', 'words', 'lemmas', 'msds', 'jos_dep_head', 'jos_dep_rel']
```

#### Data fields in 'semantic_role_labeling'

```
['id_doc', 'idx_par', 'idx_sent', 'id_words', 'words', 'lemmas', 'msds', 'srl_info']
```

#### Data fields in 'multiword_expressions'

```
['id_doc', 'idx_par', 'idx_sent', 'id_words', 'words', 'lemmas', 'msds', 'mwe_info']
```

## Additional Information

### Dataset Curators

Simon Krek et al. (please see http://hdl.handle.net/11356/1434 for the full list)

### Licensing Information

CC BY-NC-SA 4.0.

### Citation Information

The paper describing the dataset:
```
@InProceedings{krek2020ssj500k,
title = {The ssj500k Training Corpus for Slovene Language Processing},
author={Krek, Simon and Erjavec, Tomaž and Dobrovoljc, Kaja and Gantar, Polona and Arhar Holdt, Spela and Čibej, Jaka and Brank, Janez},
booktitle={Proceedings of the Conference on Language Technologies and Digital Humanities},
year={2020},
pages={24-33}
}
```

The resource itself:
```
@misc{krek2021clarinssj500k,
title = {Training corpus ssj500k 2.3},
author = {Krek, Simon and Dobrovoljc, Kaja and Erjavec, Toma{\v z} and Mo{\v z}e, Sara and Ledinek, Nina and Holz, Nanika and Zupan, Katja and Gantar, Polona and Kuzman, Taja and {\v C}ibej, Jaka and Arhar Holdt, {\v S}pela and Kav{\v c}i{\v c}, Teja and {\v S}krjanec, Iza and Marko, Dafne and Jezer{\v s}ek, Lucija and Zajc, Anja},
url = {http://hdl.handle.net/11356/1434},
year = {2021}
}
```

### Contributions

Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset.
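As a loading sketch for this card: the example below uses the repository ID `cjvt/ssj500k` listed beneath and the `all_data` config described above; the `train` split name is an assumption, so check the loaded dataset for the actual split names.

```python
from datasets import load_dataset

# "all_data" exposes every sentence together with its has_*_ann flags;
# the narrower configs (e.g. "named_entity_recognition") expose only the
# fields relevant to that task.
ds = load_dataset("cjvt/ssj500k", "all_data", split="train")

# Keep only sentences that carry named-entity annotations, then print
# one sentence's words next to their IOB2 tags.
ner_sentences = ds.filter(lambda ex: ex["has_ne_ann"])
example = ner_sentences[0]
for word, tag in zip(example["words"], example["ne_tags"]):
    print(f"{word}\t{tag}")
```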
cjvt/ssj500k
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "task_ids:part-of-speech", "task_ids:lemmatization", "task_ids:parsing", "annotations_creators:expert-generated", "language_creators:found", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "size_categories:10K<n<100K", "language:sl", "license:cc-by-nc-sa-4.0", "semantic-role-labeling", "multiword-expression-detection", "region:us" ]
2022-09-22T05:31:03+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found", "expert-generated"], "language": ["sl"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K", "10K<n<100K"], "source_datasets": [], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition", "part-of-speech", "lemmatization", "parsing"], "pretty_name": "ssj500k", "tags": ["semantic-role-labeling", "multiword-expression-detection"]}
2022-12-09T08:58:50+00:00
[]
[ "sl" ]
TAGS #task_categories-token-classification #task_ids-named-entity-recognition #task_ids-part-of-speech #task_ids-lemmatization #task_ids-parsing #annotations_creators-expert-generated #language_creators-found #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #size_categories-10K<n<100K #language-Slovenian #license-cc-by-nc-sa-4.0 #semantic-role-labeling #multiword-expression-detection #region-us
# Dataset Card for ssj500k Important: there exists another HF implementation of the dataset (classla/ssj500k), but it seems to be more narrowly focused. This implementation is designed for more general use - the CLASSLA version seems to expose only the specific training/validation/test annotations used in the CLASSLA library, for only a subset of the data. ### Dataset Summary The ssj500k training corpus contains about 500 000 tokens manually annotated on the levels of tokenization, sentence segmentation, morphosyntactic tagging, and lemmatization. It is also partially annotated for the following tasks: - named entity recognition (config 'named_entity_recognition') - dependency parsing(*), Universal Dependencies style (config 'dependency_parsing_ud') - dependency parsing, JOS/MULTEXT-East style (config 'dependency_parsing_jos') - semantic role labeling (config 'semantic_role_labeling') - multi-word expressions (config 'multiword_expressions') If you want to load all the data along with their partial annotations, please use the config 'all_data'. \* _The UD dependency parsing labels are included here for completeness, but using the dataset universal_dependencies should be preferred for dependency parsing applications to ensure you are using the most up-to-date data._ ### Supported Tasks and Leaderboards Sentence tokenization, sentence segmentation, morphosyntactic tagging, lemmatization, named entity recognition, dependency parsing, semantic role labeling, multi-word expression detection. ### Languages Slovenian. ## Dataset Structure ### Data Instances A sample instance from the dataset (using the config 'all_data'): ### Data Fields The following attributes are present in the most general config ('all_data'). Please see below for attributes present in the specific configs. 
- 'id_doc': a string containing the identifier of the document; - 'idx_par': an int32 containing the consecutive number of the paragraph, which the current sentence is a part of; - 'idx_sent': an int32 containing the consecutive number of the current sentence inside the current paragraph; - 'id_words': a list of strings containing the identifiers of words - potentially redundant, helpful for connecting the dataset with external datasets like coref149; - 'words': a list of strings containing the words in the current sentence; - 'lemmas': a list of strings containing the lemmas in the current sentence; - 'msds': a list of strings containing the morphosyntactic description of words in the current sentence; - 'has_ne_ann': a bool indicating whether the current example has named entities annotated; - 'has_ud_dep_ann': a bool indicating whether the current example has dependencies (in UD style) annotated; - 'has_jos_dep_ann': a bool indicating whether the current example has dependencies (in JOS style) annotated; - 'has_srl_ann': a bool indicating whether the current example has semantic roles annotated; - 'has_mwe_ann': a bool indicating whether the current example has multi-word expressions annotated; - 'ne_tags': a list of strings containing the named entity tags encoded using IOB2 - if 'has_ne_ann=False' all tokens are annotated with '"N/A"'; - 'ud_dep_head': a list of int32 containing the head index for each word (using UD guidelines) - the head index of the root word is '-1'; if 'has_ud_dep_ann=False' all tokens are annotated with '-2'; - 'ud_dep_rel': a list of strings containing the relation with the head for each word (using UD guidelines) - if 'has_ud_dep_ann=False' all tokens are annotated with '"N/A"'; - 'jos_dep_head': a list of int32 containing the head index for each word (using JOS guidelines) - the head index of the root word is '-1'; if 'has_jos_dep_ann=False' all tokens are annotated with '-2'; - 'jos_dep_rel': a list of strings containing the relation with the head for each word (using JOS guidelines) - if 'has_jos_dep_ann=False' all tokens are annotated with '"N/A"'; - 'srl_info': a list of dicts, each containing index of the argument word, the head (verb) word, and the semantic role - if 'has_srl_ann=False' this list is empty; - 'mwe_info': a list of dicts, each containing word indices and the type of a multi-word expression; #### Data fields in 'named_entity_recognition' #### Data fields in 'dependency_parsing_ud' #### Data fields in 'dependency_parsing_jos' #### Data fields in 'semantic_role_labeling' #### Data fields in 'multiword_expressions' ## Additional Information ### Dataset Curators Simon Krek; et al. (please see URL for the full list) ### Licensing Information CC BY-NC-SA 4.0. The paper describing the dataset: The resource itself: ### Contributions Thanks to @matejklemen for adding this dataset.
[ "# Dataset Card for ssj500k\n\nImportant: there exists another HF implementation of the dataset (classla/ssj500k), but it seems to be more narrowly focused. This implementation is designed for more general use - the CLASSLA version seems to expose only the specific training/validation/test annotations used in the CLASSLA library, for only a subset of the data.", "### Dataset Summary\n\nThe ssj500k training corpus contains about 500 000 tokens manually annotated on the levels of tokenization, sentence segmentation, morphosyntactic tagging, and lemmatization. It is also partially annotated for the following tasks:\n- named entity recognition (config 'named_entity_recognition') \n- dependency parsing(*), Universal Dependencies style (config 'dependency_parsing_ud') \n- dependency parsing, JOS/MULTEXT-East style (config 'dependency_parsing_jos') \n- semantic role labeling (config 'semantic_role_labeling') \n- multi-word expressions (config 'multiword_expressions') \n\nIf you want to load all the data along with their partial annotations, please use the config 'all_data'. \n\n\\* _The UD dependency parsing labels are included here for completeness, but using the dataset universal_dependencies should be preferred for dependency parsing applications to ensure you are using the most up-to-date data._", "### Supported Tasks and Leaderboards\n\nSentence tokenization, sentence segmentation, morphosyntactic tagging, lemmatization, named entity recognition, dependency parsing, semantic role labeling, multi-word expression detection.", "### Languages\n\nSlovenian.", "## Dataset Structure", "### Data Instances\n\nA sample instance from the dataset (using the config 'all_data'):", "### Data Fields\n\nThe following attributes are present in the most general config ('all_data'). Please see below for attributes present in the specific configs. 
\n- 'id_doc': a string containing the identifier of the document; \n- 'idx_par': an int32 containing the consecutive number of the paragraph, which the current sentence is a part of; \n- 'idx_sent': an int32 containing the consecutive number of the current sentence inside the current paragraph; \n- 'id_words': a list of strings containing the identifiers of words - potentially redundant, helpful for connecting the dataset with external datasets like coref149; \n- 'words': a list of strings containing the words in the current sentence; \n- 'lemmas': a list of strings containing the lemmas in the current sentence; \n- 'msds': a list of strings containing the morphosyntactic description of words in the current sentence; \n- 'has_ne_ann': a bool indicating whether the current example has named entities annotated; \n- 'has_ud_dep_ann': a bool indicating whether the current example has dependencies (in UD style) annotated; \n- 'has_jos_dep_ann': a bool indicating whether the current example has dependencies (in JOS style) annotated; \n- 'has_srl_ann': a bool indicating whether the current example has semantic roles annotated; \n- 'has_mwe_ann': a bool indicating whether the current example has multi-word expressions annotated; \n- 'ne_tags': a list of strings containing the named entity tags encoded using IOB2 - if 'has_ne_ann=False' all tokens are annotated with '\"N/A\"'; \n- 'ud_dep_head': a list of int32 containing the head index for each word (using UD guidelines) - the head index of the root word is '-1'; if 'has_ud_dep_ann=False' all tokens are annotated with '-2'; \n- 'ud_dep_rel': a list of strings containing the relation with the head for each word (using UD guidelines) - if 'has_ud_dep_ann=False' all tokens are annotated with '\"N/A\"'; \n- 'jos_dep_head': a list of int32 containing the head index for each word (using JOS guidelines) - the head index of the root word is '-1'; if 'has_jos_dep_ann=False' all tokens are annotated with '-2'; \n- 'jos_dep_rel': a list of strings containing the relation with the head for each word (using JOS guidelines) - if 'has_jos_dep_ann=False' all tokens are annotated with '\"N/A\"'; \n- 'srl_info': a list of dicts, each containing index of the argument word, the head (verb) word, and the semantic role - if 'has_srl_ann=False' this list is empty; \n- 'mwe_info': a list of dicts, each containing word indices and the type of a multi-word expression;", "#### Data fields in 'named_entity_recognition'", "#### Data fields in 'dependency_parsing_ud'", "#### Data fields in 'dependency_parsing_jos'", "#### Data fields in 'semantic_role_labeling'", "#### Data fields in 'multiword_expressions'", "## Additional Information", "### Dataset Curators\n\nSimon Krek; et al. (please see URL for the full list)", "### Licensing Information\n\nCC BY-NC-SA 4.0.\n\n\n\nThe paper describing the dataset:\n\n\nThe resource itself:", "### Contributions\n\nThanks to @matejklemen for adding this dataset." ]
[ "TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #task_ids-part-of-speech #task_ids-lemmatization #task_ids-parsing #annotations_creators-expert-generated #language_creators-found #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #size_categories-10K<n<100K #language-Slovenian #license-cc-by-nc-sa-4.0 #semantic-role-labeling #multiword-expression-detection #region-us \n", "# Dataset Card for ssj500k\n\nImportant: there exists another HF implementation of the dataset (classla/ssj500k), but it seems to be more narrowly focused. This implementation is designed for more general use - the CLASSLA version seems to expose only the specific training/validation/test annotations used in the CLASSLA library, for only a subset of the data.", "### Dataset Summary\n\nThe ssj500k training corpus contains about 500 000 tokens manually annotated on the levels of tokenization, sentence segmentation, morphosyntactic tagging, and lemmatization. It is also partially annotated for the following tasks:\n- named entity recognition (config 'named_entity_recognition') \n- dependency parsing(*), Universal Dependencies style (config 'dependency_parsing_ud') \n- dependency parsing, JOS/MULTEXT-East style (config 'dependency_parsing_jos') \n- semantic role labeling (config 'semantic_role_labeling') \n- multi-word expressions (config 'multiword_expressions') \n\nIf you want to load all the data along with their partial annotations, please use the config 'all_data'. \n\n\\* _The UD dependency parsing labels are included here for completeness, but using the dataset universal_dependencies should be preferred for dependency parsing applications to ensure you are using the most up-to-date data._", "### Supported Tasks and Leaderboards\n\nSentence tokenization, sentence segmentation, morphosyntactic tagging, lemmatization, named entity recognition, dependency parsing, semantic role labeling, multi-word expression detection.", "### Languages\n\nSlovenian.", "## Dataset Structure", "### Data Instances\n\nA sample instance from the dataset (using the config 'all_data'):", "### Data Fields\n\nThe following attributes are present in the most general config ('all_data'). Please see below for attributes present in the specific configs. 
\n- 'id_doc': a string containing the identifier of the document; \n- 'idx_par': an int32 containing the consecutive number of the paragraph, which the current sentence is a part of; \n- 'idx_sent': an int32 containing the consecutive number of the current sentence inside the current paragraph; \n- 'id_words': a list of strings containing the identifiers of words - potentially redundant, helpful for connecting the dataset with external datasets like coref149; \n- 'words': a list of strings containing the words in the current sentence; \n- 'lemmas': a list of strings containing the lemmas in the current sentence; \n- 'msds': a list of strings containing the morphosyntactic description of words in the current sentence; \n- 'has_ne_ann': a bool indicating whether the current example has named entities annotated; \n- 'has_ud_dep_ann': a bool indicating whether the current example has dependencies (in UD style) annotated; \n- 'has_jos_dep_ann': a bool indicating whether the current example has dependencies (in JOS style) annotated; \n- 'has_srl_ann': a bool indicating whether the current example has semantic roles annotated; \n- 'has_mwe_ann': a bool indicating whether the current example has multi-word expressions annotated; \n- 'ne_tags': a list of strings containing the named entity tags encoded using IOB2 - if 'has_ne_ann=False' all tokens are annotated with '\"N/A\"'; \n- 'ud_dep_head': a list of int32 containing the head index for each word (using UD guidelines) - the head index of the root word is '-1'; if 'has_ud_dep_ann=False' all tokens are annotated with '-2'; \n- 'ud_dep_rel': a list of strings containing the relation with the head for each word (using UD guidelines) - if 'has_ud_dep_ann=False' all tokens are annotated with '\"N/A\"'; \n- 'jos_dep_head': a list of int32 containing the head index for each word (using JOS guidelines) - the head index of the root word is '-1'; if 'has_jos_dep_ann=False' all tokens are annotated with '-2'; \n- 'jos_dep_rel': a list of strings containing the relation with the head for each word (using JOS guidelines) - if 'has_jos_dep_ann=False' all tokens are annotated with '\"N/A\"'; \n- 'srl_info': a list of dicts, each containing index of the argument word, the head (verb) word, and the semantic role - if 'has_srl_ann=False' this list is empty; \n- 'mwe_info': a list of dicts, each containing word indices and the type of a multi-word expression;", "#### Data fields in 'named_entity_recognition'", "#### Data fields in 'dependency_parsing_ud'", "#### Data fields in 'dependency_parsing_jos'", "#### Data fields in 'semantic_role_labeling'", "#### Data fields in 'multiword_expressions'", "## Additional Information", "### Dataset Curators\n\nSimon Krek; et al. (please see URL for the full list)", "### Licensing Information\n\nCC BY-NC-SA 4.0.\n\n\n\nThe paper describing the dataset:\n\n\nThe resource itself:", "### Contributions\n\nThanks to @matejklemen for adding this dataset." ]
80845435ce686b8a9dbf70a05452fbfb8e09cdd7
# Dataset Card for Fashionpedia

## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://fashionpedia.github.io/home/index.html
- **Repository:** https://github.com/cvdfoundation/fashionpedia
- **Paper:** https://arxiv.org/abs/2004.12276

### Dataset Summary

Fashionpedia is a dataset mapping out the visual aspects of the fashion world. From the paper:

> Fashionpedia is a new dataset which consists of two parts: (1) an ontology built by fashion experts containing 27 main apparel categories, 19 apparel parts, 294 fine-grained attributes and their relationships; (2) a dataset with everyday and celebrity event fashion images annotated with segmentation masks and their associated per-mask fine-grained attributes, built upon the Fashionpedia ontology.

Fashionpedia has:
- 46781 images
- 342182 bounding-boxes

### Supported Tasks

- Object detection
- Image classification

### Languages

All annotations use English as the primary language.

## Dataset Structure

The dataset is structured as follows:

```py
DatasetDict({
    train: Dataset({
        features: ['image_id', 'image', 'width', 'height', 'objects'],
        num_rows: 45623
    })
    val: Dataset({
        features: ['image_id', 'image', 'width', 'height', 'objects'],
        num_rows: 1158
    })
})
```

### Data Instances

An example of the data for one image is:

```py
{'image_id': 23,
 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=682x1024>,
 'width': 682,
 'height': 1024,
 'objects': {'bbox_id': [150311, 150312, 150313, 150314],
  'category': [23, 23, 33, 10],
  'bbox': [[445.0, 910.0, 505.0, 983.0],
   [239.0, 940.0, 284.0, 994.0],
   [298.0, 282.0, 386.0, 352.0],
   [210.0, 282.0, 448.0, 665.0]],
  'area': [1422, 843, 373, 56375]}}
```

With the type of each field being defined as:

```py
{'image_id': Value(dtype='int64'),
 'image': Image(decode=True),
 'width': Value(dtype='int64'),
 'height': Value(dtype='int64'),
 'objects': Sequence(feature={
    'bbox_id': Value(dtype='int64'),
    'category': ClassLabel(num_classes=46, names=['shirt, blouse', 'top, t-shirt, sweatshirt', 'sweater', 'cardigan', 'jacket', 'vest', 'pants', 'shorts', 'skirt', 'coat', 'dress', 'jumpsuit', 'cape', 'glasses', 'hat', 'headband, head covering, hair accessory', 'tie', 'glove', 'watch', 'belt', 'leg warmer', 'tights, stockings', 'sock', 'shoe', 'bag, wallet', 'scarf', 'umbrella', 'hood', 'collar', 'lapel', 'epaulette', 'sleeve', 'pocket', 'neckline', 'buckle', 'zipper', 'applique', 'bead', 'bow', 'flower', 'fringe', 'ribbon', 'rivet', 'ruffle', 'sequin', 'tassel']),
    'bbox': Sequence(feature=Value(dtype='float64'), length=4),
    'area': Value(dtype='int64')},
  length=-1)}
```

### Data Fields

The dataset has the following fields:

- `image_id`: Unique numeric ID of the image.
- `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time.
Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
- `width`: Image width.
- `height`: Image height.
- `objects`: A dictionary containing bounding box metadata for the objects in the image:
  - `bbox_id`: Unique numeric ID of the bounding box annotation.
  - `category`: The object’s category.
  - `area`: The area of the bounding box.
  - `bbox`: The object’s bounding box (in the Pascal VOC format).

### Data Splits

|                | Train  | Validation | Test |
|----------------|--------|------------|------|
| Images         | 45623  | 1158       | 0    |
| Bounding boxes | 333401 | 8781       | 0    |

## Additional Information

### Licensing Information

Fashionpedia is licensed under a Creative Commons Attribution 4.0 International License.

### Citation Information

```
@inproceedings{jia2020fashionpedia,
  title={Fashionpedia: Ontology, Segmentation, and an Attribute Localization Dataset},
  author={Jia, Menglin and Shi, Mengyun and Sirotenko, Mikhail and Cui, Yin and Cardie, Claire and Hariharan, Bharath and Adam, Hartwig and Belongie, Serge},
  booktitle={European Conference on Computer Vision (ECCV)},
  year={2020}
}
```

### Contributions

Thanks to [@blinjrm](https://github.com/blinjrm) for adding this dataset.
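As a usage sketch (the repository ID `detection-datasets/fashionpedia` is listed below; the split names follow the `DatasetDict` shown above):

```python
from datasets import load_dataset

ds = load_dataset("detection-datasets/fashionpedia", split="val")

# Query the sample index first so that only this one image is decoded.
sample = ds[0]
image = sample["image"]      # a PIL.Image.Image
objects = sample["objects"]

# Map the numeric category IDs back to their human-readable names.
category_names = ds.features["objects"].feature["category"].names
for box, cat in zip(objects["bbox"], objects["category"]):
    print(category_names[cat], box)  # box is [xmin, ymin, xmax, ymax] (Pascal VOC)
```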
detection-datasets/fashionpedia
[ "task_categories:object-detection", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-4.0", "object-detection", "fashion", "computer-vision", "arxiv:2004.12276", "region:us" ]
2022-09-22T09:33:24+00:00
{"language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["object-detection"], "paperswithcode_id": "fashionpedia", "pretty_name": "Fashionpedia", "tags": ["object-detection", "fashion", "computer-vision"]}
2022-09-22T12:22:02+00:00
[ "2004.12276" ]
[ "en" ]
TAGS #task_categories-object-detection #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-4.0 #object-detection #fashion #computer-vision #arxiv-2004.12276 #region-us
Dataset Card for Fashionpedia
=============================

Table of Contents
-----------------

* Dataset Description
	+ Dataset Summary
	+ Supported Tasks and Leaderboards
	+ Languages
* Dataset Structure
	+ Data Instances
	+ Data Fields
	+ Data Splits
* Additional Information
	+ Licensing Information
	+ Citation Information
	+ Contributions

Dataset Description
-------------------

* Homepage: URL
* Repository: URL
* Paper: URL

### Dataset Summary

Fashionpedia is a dataset mapping out the visual aspects of the fashion world.
From the paper:

> 
> Fashionpedia is a new dataset which consists of two parts: (1) an ontology built by fashion experts containing 27 main apparel categories, 19 apparel parts, 294 fine-grained attributes and their relationships; (2) a dataset with everyday and celebrity event fashion images annotated with segmentation masks and their associated per-mask fine-grained attributes, built upon the Fashionpedia ontology.
> Fashionpedia has:
> 
> 
> 

* 46781 images
* 342182 bounding-boxes

### Supported Tasks

* Object detection
* Image classification

### Languages

All annotations use English as the primary language.

Dataset Structure
-----------------

The dataset is structured as follows:

### Data Instances

An example of the data for one image is:

With the type of each field being defined as:

### Data Fields

The dataset has the following fields:

* 'image\_id': Unique numeric ID of the image.
* 'image': A 'PIL.Image.Image' object containing the image. Note that when accessing the image column: 'dataset[0]["image"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '"image"' column, *i.e.* 'dataset[0]["image"]' should always be preferred over 'dataset["image"][0]'
* 'width': Image width.
* 'height': Image height.
* 'objects': A dictionary containing bounding box metadata for the objects in the image:
	+ 'bbox\_id': Unique numeric ID of the bounding box annotation.
	+ 'category': The object’s category.
	+ 'area': The area of the bounding box.
	+ 'bbox': The object’s bounding box (in the Pascal VOC format)

### Data Splits

Additional Information
----------------------

### Licensing Information

Fashionpedia is licensed under a Creative Commons Attribution 4.0 International License.

### Contributions

Thanks to @blinjrm for adding this dataset.
[ "### Dataset Summary\n\n\nFashionpedia is a dataset mapping out the visual aspects of the fashion world.\nFrom the paper:\n\n\n\n> \n> Fashionpedia is a new dataset which consists of two parts: (1) an ontology built by fashion experts containing 27 main apparel categories, 19 apparel parts, 294 fine-grained attributes and their relationships; (2) a dataset with everyday and celebrity event fashion images annotated with segmentation masks and their associated per-mask fine-grained attributes, built upon the Fashionpedia ontology.\n> Fashionpedia has:\n> \n> \n> \n\n\n* 46781 images\n* 342182 bounding-boxes", "### Supported Tasks\n\n\n* Object detection\n* Image classification", "### Languages\n\n\nAll of annotations use English as primary language.\n\n\nDataset Structure\n-----------------\n\n\nThe dataset is structured as follows:", "### Data Instances\n\n\nAn example of the data for one image is:\n\n\nWith the type of each field being defined as:", "### Data Fields\n\n\nThe dataset has the following fields:\n\n\n* 'image\\_id': Unique numeric ID of the image.\n* 'image': A 'PIL.Image.Image' object containing the image. Note that when accessing the image column: 'dataset[0][\"image\"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '\"image\"' column, *i.e.* 'dataset[0][\"image\"]' should always be preferred over 'dataset[\"image\"][0]'\n* 'width': Image width.\n* 'height': Image height.\n* 'objects': A dictionary containing bounding box metadata for the objects in the image:\n\t+ 'bbox\\_id': Unique numeric ID of the bounding box annotation.\n\t+ 'category': The object’s category.\n\t+ 'area': The area of the bounding box.\n\t+ 'bbox': The object’s bounding box (in the Pascal VOC format)", "### Data Splits\n\n\n\nAdditional Information\n----------------------", "### Licensing Information\n\n\nFashionpedia is licensed under a Creative Commons Attribution 4.0 International License.", "### Contributions\n\n\nThanks to @blinjrm for adding this dataset." ]
[ "TAGS\n#task_categories-object-detection #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-4.0 #object-detection #fashion #computer-vision #arxiv-2004.12276 #region-us \n", "### Dataset Summary\n\n\nFashionpedia is a dataset mapping out the visual aspects of the fashion world.\nFrom the paper:\n\n\n\n> \n> Fashionpedia is a new dataset which consists of two parts: (1) an ontology built by fashion experts containing 27 main apparel categories, 19 apparel parts, 294 fine-grained attributes and their relationships; (2) a dataset with everyday and celebrity event fashion images annotated with segmentation masks and their associated per-mask fine-grained attributes, built upon the Fashionpedia ontology.\n> Fashionpedia has:\n> \n> \n> \n\n\n* 46781 images\n* 342182 bounding-boxes", "### Supported Tasks\n\n\n* Object detection\n* Image classification", "### Languages\n\n\nAll of annotations use English as primary language.\n\n\nDataset Structure\n-----------------\n\n\nThe dataset is structured as follows:", "### Data Instances\n\n\nAn example of the data for one image is:\n\n\nWith the type of each field being defined as:", "### Data Fields\n\n\nThe dataset has the following fields:\n\n\n* 'image\\_id': Unique numeric ID of the image.\n* 'image': A 'PIL.Image.Image' object containing the image. Note that when accessing the image column: 'dataset[0][\"image\"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '\"image\"' column, *i.e.* 'dataset[0][\"image\"]' should always be preferred over 'dataset[\"image\"][0]'\n* 'width': Image width.\n* 'height': Image height.\n* 'objects': A dictionary containing bounding box metadata for the objects in the image:\n\t+ 'bbox\\_id': Unique numeric ID of the bounding box annotation.\n\t+ 'category': The object’s category.\n\t+ 'area': The area of the bounding box.\n\t+ 'bbox': The object’s bounding box (in the Pascal VOC format)", "### Data Splits\n\n\n\nAdditional Information\n----------------------", "### Licensing Information\n\n\nFashionpedia is licensed under a Creative Commons Attribution 4.0 International License.", "### Contributions\n\n\nThanks to @blinjrm for adding this dataset." ]
871826e171a2cf997849318707f1a6970bc53be6
This dataset was created by randomly sampling 1M documents from [the large supervised proportional mixture](https://github.com/google-research/text-to-text-transfer-transformer/blob/733428af1c961e09ea0b7292ad9ac9e0e001f8a5/t5/data/mixtures.py#L193) from the [T5](https://github.com/google-research/text-to-text-transfer-transformer) repository. The code used to produce this sampled dataset can be found [here](https://github.com/chenyu-jiang/text-to-text-transfer-transformer/blob/main/prepare_dataset.py).
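A minimal loading sketch with the `datasets` library (the `train` split name and streaming support are assumptions, as this card does not state them):

```python
from datasets import load_dataset

# Stream so the 1M sampled documents are not all materialized locally first.
ds = load_dataset("jchenyu/t5_large_supervised_proportional_1M",
                  split="train", streaming=True)
print(next(iter(ds)))
```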
jchenyu/t5_large_supervised_proportional_1M
[ "license:apache-2.0", "region:us" ]
2022-09-22T10:21:39+00:00
{"license": "apache-2.0"}
2022-09-22T10:35:08+00:00
[]
[]
TAGS #license-apache-2.0 #region-us
This dataset was created by randomly sampling 1M documents from the large supervised proportional mixture from the T5 repository. The code used to produce this sampled dataset can be found here.
[]
[ "TAGS\n#license-apache-2.0 #region-us \n" ]
2db8cc29752777441ed3bed7ca97352171059550
# Dataset Card for SemCor

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://web.eecs.umich.edu/~mihalcea/downloads.html#semcor
- **Repository:**
- **Paper:** https://aclanthology.org/H93-1061/
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

SemCor 3.0 was automatically created from SemCor 1.6 by mapping WordNet 1.6 to WordNet 3.0 senses. SemCor 1.6 was created and is property of Princeton University.

Some (few) word senses from WordNet 1.6 were dropped, and therefore they cannot be retrieved anymore in the 3.0 database. A sense of 0 (wnsn=0) is used to symbolize a missing sense in WordNet 3.0.

The automatic mapping was performed within the Language and Information Technologies lab at UNT, by Rada Mihalcea ([email protected]).

THIS MAPPING IS PROVIDED "AS IS" AND UNT MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, UNT MAKES NO REPRESENTATIONS OR WARRANTIES OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE.

In agreement with the license from Princeton University, permission to use, copy, modify and distribute this database for any purpose and without fee or royalty is hereby granted, provided that you agree to comply with the Princeton copyright notice and statements, including the disclaimer, and that the same appear on ALL copies of the database, including modifications that you make for internal use or for distribution.

Both LICENSE and README files distributed with the SemCor 1.6 package are included in the current distribution of SemCor 3.0.

### Languages

English

## Additional Information

### Licensing Information

WordNet Release 1.6 Semantic Concordance Release 1.6

This software and database is being provided to you, the LICENSEE, by Princeton University under the following license. By obtaining, using and/or copying this software and database, you agree that you have read, understood, and will comply with these terms and conditions.:

Permission to use, copy, modify and distribute this software and database and its documentation for any purpose and without fee or royalty is hereby granted, provided that you agree to comply with the following copyright notice and statements, including the disclaimer, and that the same appear on ALL copies of the software, database and documentation, including modifications that you make for internal use or for distribution.

WordNet 1.6 Copyright 1997 by Princeton University. All rights reserved.
THIS SOFTWARE AND DATABASE IS PROVIDED "AS IS" AND PRINCETON UNIVERSITY MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PRINCETON UNIVERSITY MAKES NO REPRESENTATIONS OR WARRANTIES OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF THE LICENSED SOFTWARE, DATABASE OR DOCUMENTATION WILL NOT INFRINGE ANY THIRD PARTY PATENTS, COPYRIGHTS, TRADEMARKS OR OTHER RIGHTS.

The name of Princeton University or Princeton may not be used in advertising or publicity pertaining to distribution of the software and/or database. Title to copyright in this software, database and any associated documentation shall at all times remain with Princeton University and LICENSEE agrees to preserve same.

### Citation Information

```bibtex
@inproceedings{miller-etal-1993-semantic,
    title = "A Semantic Concordance",
    author = "Miller, George A. and
      Leacock, Claudia and
      Tengi, Randee and
      Bunker, Ross T.",
    booktitle = "{H}uman {L}anguage {T}echnology: Proceedings of a Workshop Held at Plainsboro, New Jersey, March 21-24, 1993",
    year = "1993",
    url = "https://aclanthology.org/H93-1061",
}
```

### Contributions

Thanks to [@thesofakillers](https://github.com/thesofakillers) for adding this dataset, converting from xml to csv.
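A minimal loading sketch (the repository ID `thesofakillers/SemCor` is listed below; the `train` split name and the `wnsn` column name are assumptions inferred from the summary above, not a confirmed schema):

```python
from datasets import load_dataset

ds = load_dataset("thesofakillers/SemCor", split="train")

# wnsn == 0 marks tokens whose WordNet 1.6 sense has no WordNet 3.0 mapping.
missing = ds.filter(lambda ex: str(ex["wnsn"]) == "0")
print(f"{len(missing)} tokens with no WordNet 3.0 sense")
```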
thesofakillers/SemCor
[ "task_categories:text-classification", "task_ids:topic-classification", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:other", "word sense disambiguation", "semcor", "wordnet", "region:us" ]
2022-09-22T12:31:04+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["topic-classification"], "pretty_name": "SemCor", "tags": ["word sense disambiguation", "semcor", "wordnet"]}
2022-10-12T07:46:28+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-topic-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-other #word sense disambiguation #semcor #wordnet #region-us
# Dataset Card for SemCor

## Table of Contents

- Table of Contents
- Dataset Description
  - Dataset Summary
  - Supported Tasks and Leaderboards
  - Languages
- Dataset Structure
  - Data Instances
  - Data Fields
  - Data Splits
- Dataset Creation
  - Curation Rationale
  - Source Data
  - Annotations
  - Personal and Sensitive Information
- Considerations for Using the Data
  - Social Impact of Dataset
  - Discussion of Biases
  - Other Known Limitations
- Additional Information
  - Dataset Curators
  - Licensing Information
  - Citation Information
  - Contributions

## Dataset Description

- Homepage: URL
- Repository:
- Paper: URL
- Leaderboard:
- Point of Contact:

### Dataset Summary

SemCor 3.0 was automatically created from SemCor 1.6 by mapping WordNet 1.6 to WordNet 3.0 senses. SemCor 1.6 was created and is property of Princeton University.

Some (few) word senses from WordNet 1.6 were dropped, and therefore they cannot be retrieved anymore in the 3.0 database. A sense of 0 (wnsn=0) is used to symbolize a missing sense in WordNet 3.0.

The automatic mapping was performed within the Language and Information Technologies lab at UNT, by Rada Mihalcea (rada@URL).

THIS MAPPING IS PROVIDED "AS IS" AND UNT MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, UNT MAKES NO REPRESENTATIONS OR WARRANTIES OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE.

In agreement with the license from Princeton University, permission to use, copy, modify and distribute this database for any purpose and without fee or royalty is hereby granted, provided that you agree to comply with the Princeton copyright notice and statements, including the disclaimer, and that the same appear on ALL copies of the database, including modifications that you make for internal use or for distribution.

Both LICENSE and README files distributed with the SemCor 1.6 package are included in the current distribution of SemCor 3.0.

### Languages

English

## Additional Information

### Licensing Information

WordNet Release 1.6 Semantic Concordance Release 1.6

This software and database is being provided to you, the LICENSEE, by Princeton University under the following license. By obtaining, using and/or copying this software and database, you agree that you have read, understood, and will comply with these terms and conditions.:

Permission to use, copy, modify and distribute this software and database and its documentation for any purpose and without fee or royalty is hereby granted, provided that you agree to comply with the following copyright notice and statements, including the disclaimer, and that the same appear on ALL copies of the software, database and documentation, including modifications that you make for internal use or for distribution.

WordNet 1.6 Copyright 1997 by Princeton University. All rights reserved.

THIS SOFTWARE AND DATABASE IS PROVIDED "AS IS" AND PRINCETON UNIVERSITY MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PRINCETON UNIVERSITY MAKES NO REPRESENTATIONS OR WARRANTIES OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF THE LICENSED SOFTWARE, DATABASE OR DOCUMENTATION WILL NOT INFRINGE ANY THIRD PARTY PATENTS, COPYRIGHTS, TRADEMARKS OR OTHER RIGHTS.

The name of Princeton University or Princeton may not be used in advertising or publicity pertaining to distribution of the software and/or database.
Title to copyright in this software, database and any associated documentation shall at all times remain with Princeton University and LICENSEE agrees to preserve same. ### Contributions Thanks to @thesofakillers for adding this dataset, converting from xml to csv.
[ "# Dataset Card for SemCor", "## Table of Contents\n\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper: URL\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nSemCor 3.0 was automatically created from SemCor 1.6 by mapping WordNet 1.6 to\nWordNet 3.0 senses. SemCor 1.6 was created and is property of Princeton\nUniversity.\n\nSome (few) word senses from WordNet 1.6 were dropped, and therefore they cannot\nbe retrieved anymore in the 3.0 database. A sense of 0 (wnsn=0) is used to\nsymbolize a missing sense in WordNet 3.0.\n\nThe automatic mapping was performed within the Language and Information\nTechnologies lab at UNT, by Rada Mihalcea (rada@URL).\n\nTHIS MAPPING IS PROVIDED \"AS IS\" AND UNT MAKES NO REPRESENTATIONS OR WARRANTIES,\nEXPRESS OR IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, UNT MAKES NO\nREPRESENTATIONS OR WARRANTIES OF MERCHANT- ABILITY OR FITNESS FOR ANY PARTICULAR\nPURPOSE.\n\nIn agreement with the license from Princeton Univerisity, you are granted\npermission to use, copy, modify and distribute this database \nfor any purpose and without fee and royalty is hereby granted, provided that you\nagree to comply with the Princeton copyright notice and statements, including\nthe disclaimer, and that the same appear on ALL copies of the database,\nincluding modifications that you make for internal \nuse or for distribution. \nBoth LICENSE and README files distributed with the SemCor 1.6 package are\nincluded in the current distribution of SemCor 3.0.", "### Languages\n\nEnglish", "## Additional Information", "### Licensing Information\n\nWordNet Release 1.6 Semantic Concordance Release 1.6\n\nThis software and database is being provided to you, the LICENSEE, by \nPrinceton University under the following license. By obtaining, using \nand/or copying this software and database, you agree that you have \nread, understood, and will comply with these terms and conditions.:\n\nPermission to use, copy, modify and distribute this software and \ndatabase and its documentation for any purpose and without fee or \nroyalty is hereby granted, provided that you agree to comply with \nthe following copyright notice and statements, including the disclaimer, \nand that the same appear on ALL copies of the software, database and \ndocumentation, including modifications that you make for internal \nuse or for distribution.\n\nWordNet 1.6 Copyright 1997 by Princeton University. All rights reserved.\n\nTHIS SOFTWARE AND DATABASE IS PROVIDED \"AS IS\" AND PRINCETON \nUNIVERSITY MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR \nIMPLIED. 
BY WAY OF EXAMPLE, BUT NOT LIMITATION, PRINCETON \nUNIVERSITY MAKES NO REPRESENTATIONS OR WARRANTIES OF MERCHANT- \nABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE USE \nOF THE LICENSED SOFTWARE, DATABASE OR DOCUMENTATION WILL NOT \nINFRINGE ANY THIRD PARTY PATENTS, COPYRIGHTS, TRADEMARKS OR \nOTHER RIGHTS.\n\nThe name of Princeton University or Princeton may not be used in \nadvertising or publicity pertaining to distribution of the software \nand/or database. Title to copyright in this software, database and \nany associated documentation shall at all times remain with \nPrinceton University and LICENSEE agrees to preserve same.", "### Contributions\n\nThanks to @thesofakillers for adding this\ndataset, converting from xml to csv." ]
[ "TAGS\n#task_categories-text-classification #task_ids-topic-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-other #word sense disambiguation #semcor #wordnet #region-us \n", "# Dataset Card for SemCor", "## Table of Contents\n\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper: URL\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nSemCor 3.0 was automatically created from SemCor 1.6 by mapping WordNet 1.6 to\nWordNet 3.0 senses. SemCor 1.6 was created and is property of Princeton\nUniversity.\n\nSome (few) word senses from WordNet 1.6 were dropped, and therefore they cannot\nbe retrieved anymore in the 3.0 database. A sense of 0 (wnsn=0) is used to\nsymbolize a missing sense in WordNet 3.0.\n\nThe automatic mapping was performed within the Language and Information\nTechnologies lab at UNT, by Rada Mihalcea (rada@URL).\n\nTHIS MAPPING IS PROVIDED \"AS IS\" AND UNT MAKES NO REPRESENTATIONS OR WARRANTIES,\nEXPRESS OR IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, UNT MAKES NO\nREPRESENTATIONS OR WARRANTIES OF MERCHANT- ABILITY OR FITNESS FOR ANY PARTICULAR\nPURPOSE.\n\nIn agreement with the license from Princeton Univerisity, you are granted\npermission to use, copy, modify and distribute this database \nfor any purpose and without fee and royalty is hereby granted, provided that you\nagree to comply with the Princeton copyright notice and statements, including\nthe disclaimer, and that the same appear on ALL copies of the database,\nincluding modifications that you make for internal \nuse or for distribution. \nBoth LICENSE and README files distributed with the SemCor 1.6 package are\nincluded in the current distribution of SemCor 3.0.", "### Languages\n\nEnglish", "## Additional Information", "### Licensing Information\n\nWordNet Release 1.6 Semantic Concordance Release 1.6\n\nThis software and database is being provided to you, the LICENSEE, by \nPrinceton University under the following license. By obtaining, using \nand/or copying this software and database, you agree that you have \nread, understood, and will comply with these terms and conditions.:\n\nPermission to use, copy, modify and distribute this software and \ndatabase and its documentation for any purpose and without fee or \nroyalty is hereby granted, provided that you agree to comply with \nthe following copyright notice and statements, including the disclaimer, \nand that the same appear on ALL copies of the software, database and \ndocumentation, including modifications that you make for internal \nuse or for distribution.\n\nWordNet 1.6 Copyright 1997 by Princeton University. All rights reserved.\n\nTHIS SOFTWARE AND DATABASE IS PROVIDED \"AS IS\" AND PRINCETON \nUNIVERSITY MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR \nIMPLIED. 
BY WAY OF EXAMPLE, BUT NOT LIMITATION, PRINCETON \nUNIVERSITY MAKES NO REPRESENTATIONS OR WARRANTIES OF MERCHANT- \nABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE USE \nOF THE LICENSED SOFTWARE, DATABASE OR DOCUMENTATION WILL NOT \nINFRINGE ANY THIRD PARTY PATENTS, COPYRIGHTS, TRADEMARKS OR \nOTHER RIGHTS.\n\nThe name of Princeton University or Princeton may not be used in \nadvertising or publicity pertaining to distribution of the software \nand/or database. Title to copyright in this software, database and \nany associated documentation shall at all times remain with \nPrinceton University and LICENSEE agrees to preserve same.", "### Contributions\n\nThanks to @thesofakillers for adding this\ndataset, converting from xml to csv." ]
63aac2cc0638acf1d69b9e1fb0a1b615da567550
# Dataset Card for sd-nlp ## Table of Contents - [Dataset Card for [EMBO/sd-nlp-non-tokenized]](#dataset-card-for-dataset-name) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Annotations](#annotations) - [Annotation process](#annotation-process) - [Who are the annotators?](#who-are-the-annotators) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://sourcedata.embo.org - **Repository:** https://github.com/source-data/soda-roberta - **Paper:** - **Leaderboard:** - **Point of Contact:** [email protected], [email protected] ### Dataset Summary This dataset is based on the content of the SourceData (https://sourcedata.embo.org) database, which contains manually annotated figure legends written in English and extracted from scientific papers in the domain of cell and molecular biology (Liechti et al, Nature Methods, 2017, https://doi.org/10.1038/nmeth.4471). Unlike the dataset [`sd-nlp`](https://huggingface.co/datasets/EMBO/sd-nlp), which is pre-tokenized with the `roberta-base` tokenizer, this dataset is not pre-tokenized, but only split into words. Users can therefore use it to fine-tune other models. Additional details at https://github.com/source-data/soda-roberta ### Supported Tasks and Leaderboards Tags are provided as [IOB2-style tags](https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning_(tagging)). `PANELIZATION`: figure captions (or figure legends) are usually composed of segments that each refer to one of several 'panels' of the full figure. Panels tend to represent results obtained with a coherent method and depict data points that can be meaningfully compared to each other. `PANELIZATION` provides the start (B-PANEL_START) of these segments and allows training for recognition of the boundary between consecutive panel legends. `NER`: biological and chemical entities are labeled. Specifically, the following entities are tagged: - `SMALL_MOLECULE`: small molecules - `GENEPROD`: gene products (genes and proteins) - `SUBCELLULAR`: subcellular components - `CELL`: cell types and cell lines. - `TISSUE`: tissues and organs - `ORGANISM`: species - `EXP_ASSAY`: experimental assays `ROLES`: the role of entities with regard to the causal hypotheses tested in the reported results. The tags are: - `CONTROLLED_VAR`: entities that are associated with experimental variables and that are subjected to controlled and targeted perturbations. 
- `MEASURED_VAR`: entities that are associated with the variables measured and the object of the measurements. `BORING`: entities are marked with the tag `BORING` when they are of more descriptive value and not directly associated with causal hypotheses ('boring' is not an ideal choice of word, but it is short...). Typically, these entities are so-called 'reporter' gene products, entities used as a common baseline across samples, or entities that specify the context of the experiment (cellular system, species, etc.). ### Languages The text in the dataset is English. ## Dataset Structure ### Data Instances ```json {'text': '(E) Quantification of the number of cells without γ-Tubulin at centrosomes (γ-Tub -) in pachytene and diplotene spermatocytes in control, Plk1(∆/∆) and BI2536-treated spermatocytes. Data represent average of two biological replicates per condition. ', 'labels': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 13, 14, 14, 14, 14, 14, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 4, 4, 4, 4, 4, 4, 4, 4, 0, 0, 0, 0, 5, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 0, 0, 3, 4, 4, 4, 4, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 7, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 4, 4, 4, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 2, 2, 2, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 7, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]} ``` ### Data Fields - `text`: `str` of the text - `label_ids`: a dictionary composed of lists of strings at the character level: - `entity_types`: `list` of `strings` for the IOB2 tags for entity type; possible values in `["O", "I-SMALL_MOLECULE", "B-SMALL_MOLECULE", "I-GENEPROD", "B-GENEPROD", "I-SUBCELLULAR", "B-SUBCELLULAR", "I-CELL", "B-CELL", "I-TISSUE", "B-TISSUE", "I-ORGANISM", "B-ORGANISM", "I-EXP_ASSAY", "B-EXP_ASSAY"]` - `panel_start`: `list` of `strings` for IOB2 tags `["O", "B-PANEL_START"]` ### Data Splits ```python DatasetDict({ train: Dataset({ features: ['text', 'labels'], num_rows: 66085 }) test: Dataset({ features: ['text', 'labels'], num_rows: 8225 }) validation: Dataset({ features: ['text', 'labels'], num_rows: 7948 }) }) ``` ## Dataset Creation ### Curation Rationale The dataset was built to train models for the automatic extraction of a knowledge graph from the scientific literature. The dataset can be used to train character-based models for text segmentation and named entity recognition. ### Source Data #### Initial Data Collection and Normalization Figure legends were annotated according to the SourceData framework described in Liechti et al 2017 (Nature Methods, 2017, https://doi.org/10.1038/nmeth.4471). The curation tool at https://curation.sourcedata.io was used to segment figure legends into panel legends, tag entities, assign experimental roles and normalize with standard identifiers (not available in this dataset). The source data was downloaded from the SourceData API (https://api.sourcedata.io) on 21 Jan 2021. #### Who are the source language producers? The examples are extracted from the figure legends of scientific papers in cell and molecular biology. ### Annotations #### Annotation process The annotations were produced manually by expert curators from the SourceData project (https://sourcedata.embo.org). #### Who are the annotators? 
Curators of the SourceData project. ### Personal and Sensitive Information None known. ## Considerations for Using the Data ### Social Impact of Dataset Not applicable. ### Discussion of Biases The examples are heavily biased towards cell and molecular biology and are enriched in examples from papers published in EMBO Press journals (https://embopress.org). ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators Thomas Lemberger, EMBO. ### Licensing Information CC BY 4.0 ### Citation Information [More Information Needed] ### Contributions Thanks to [@tlemberger](https://github.com/tlemberger) and [@drAbreu](https://github.com/drAbreu) for adding this dataset.
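As a usage illustration, here is a minimal sketch of loading the dataset and decoding the character-level label ids into entity spans. The id-to-tag order below is an assumption inferred from the order listed under Data Fields, not something the card confirms; verify it against the dataset's feature metadata before relying on it.

```python
from datasets import load_dataset

# Assumed id-to-tag mapping, following the order listed under "Data Fields".
# This ordering is an assumption, not confirmed by the card.
ID2TAG = [
    "O",
    "I-SMALL_MOLECULE", "B-SMALL_MOLECULE",
    "I-GENEPROD", "B-GENEPROD",
    "I-SUBCELLULAR", "B-SUBCELLULAR",
    "I-CELL", "B-CELL",
    "I-TISSUE", "B-TISSUE",
    "I-ORGANISM", "B-ORGANISM",
    "I-EXP_ASSAY", "B-EXP_ASSAY",
]

def char_spans(text, label_ids):
    """Group consecutive characters that share an entity type into (type, substring) spans."""
    spans, start, current = [], 0, "O"
    for i, label_id in enumerate(label_ids):
        tag = ID2TAG[label_id]
        entity_type = "O" if tag == "O" else tag.split("-", 1)[1]
        if entity_type != current:
            if current != "O":
                spans.append((current, text[start:i]))
            start, current = i, entity_type
    if current != "O":
        spans.append((current, text[start:len(label_ids)]))
    return spans

ds = load_dataset("EMBO/sd-character-level-ner", split="train")
example = ds[0]
print(char_spans(example["text"], example["labels"]))
```

Because the sketch groups characters by entity type rather than by strict B/I transitions, it is robust to whichever of the two orderings (B-first or I-first) the integer ids actually follow.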
EMBO/sd-character-level-ner
[ "task_categories:text-classification", "task_ids:multi-class-classification", "task_ids:named-entity-recognition", "task_ids:parsing", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "language:en", "license:cc-by-4.0", "region:us" ]
2022-09-22T12:57:31+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": [], "task_categories": ["text-classification", "structure-prediction"], "task_ids": ["multi-class-classification", "named-entity-recognition", "parsing"]}
2022-10-23T05:41:24+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-multi-class-classification #task_ids-named-entity-recognition #task_ids-parsing #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #language-English #license-cc-by-4.0 #region-us
# Dataset Card for sd-nlp ## Table of Contents - [Dataset Card for [EMBO/sd-nlp-non-tokenized]](#dataset-card-for-dataset-name) - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Initial Data Collection and Normalization - Who are the source language producers? - Annotations - Annotation process - Who are the annotators? - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: URL - Paper: - Leaderboard: - Point of Contact: thomas.lemberger@URL, URL@URL ### Dataset Summary This dataset is based on the content of the SourceData (URL) database, which contains manually annotated figure legends written in English and extracted from scientific papers in the domain of cell and molecular biology (Liechti et al, Nature Methods, 2017, URL Unlike the dataset 'sd-nlp', pre-tokenized with the 'roberta-base' tokenizer, this dataset is not previously tokenized, but just splitted into words. Users can therefore use it to fine-tune other models. Additional details at URL ### Supported Tasks and Leaderboards Tags are provided as IOB2-style tags). 'PANELIZATION': figure captions (or figure legends) are usually composed of segments that each refer to one of several 'panels' of the full figure. Panels tend to represent results obtained with a coherent method and depicts data points that can be meaningfully compared to each other. 'PANELIZATION' provide the start (B-PANEL_START) of these segments and allow to train for recogntion of the boundary between consecutive panel lengends. 'NER': biological and chemical entities are labeled. Specifically the following entities are tagged: - 'SMALL_MOLECULE': small molecules - 'GENEPROD': gene products (genes and proteins) - 'SUBCELLULAR': subcellular components - 'CELL': cell types and cell lines. - 'TISSUE': tissues and organs - 'ORGANISM': species - 'EXP_ASSAY': experimental assays 'ROLES': the role of entities with regard to the causal hypotheses tested in the reported results. The tags are: - 'CONTROLLED_VAR': entities that are associated with experimental variables and that subjected to controlled and targeted perturbations. - 'MEASURED_VAR': entities that are associated with the variables measured and the object of the measurements. 'BORING': entities are marked with the tag 'BORING' when they are more of descriptive value and not directly associated with causal hypotheses ('boring' is not an ideal choice of word, but it is short...). Typically, these entities are so-called 'reporter' geneproducts, entities used as common baseline across samples, or specify the context of the experiment (cellular system, species, etc...). ### Languages The text in the dataset is English. 
## Dataset Structure ### Data Instances ### Data Fields - 'text': 'str' of the text - 'label_ids' dictionary composed of list of strings on a character-level: - 'entity_types': 'list' of 'strings' for the IOB2 tags for entity type; possible value in '["O", "I-SMALL_MOLECULE", "B-SMALL_MOLECULE", "I-GENEPROD", "B-GENEPROD", "I-SUBCELLULAR", "B-SUBCELLULAR", "I-CELL", "B-CELL", "I-TISSUE", "B-TISSUE", "I-ORGANISM", "B-ORGANISM", "I-EXP_ASSAY", "B-EXP_ASSAY"]' - 'panel_start': 'list' of 'strings' for IOB2 tags '["O", "B-PANEL_START"]' ### Data Splits ## Dataset Creation ### Curation Rationale The dataset was built to train models for the automatic extraction of a knowledge graph based from the scientific literature. The dataset can be used to train character-based models for text segmentation and named entity recognition. ### Source Data #### Initial Data Collection and Normalization Figure legends were annotated according to the SourceData framework described in Liechti et al 2017 (Nature Methods, 2017, URL The curation tool at URL was used to segment figure legends into panel legends, tag enities, assign experiemental roles and normalize with standard identifiers (not available in this dataset). The source data was downloaded from the SourceData API (URL) on 21 Jan 2021. #### Who are the source language producers? The examples are extracted from the figure legends from scientific papers in cell and molecular biology. ### Annotations #### Annotation process The annotations were produced manually with expert curators from the SourceData project (URL) #### Who are the annotators? Curators of the SourceData project. ### Personal and Sensitive Information None known. ## Considerations for Using the Data ### Social Impact of Dataset Not applicable. ### Discussion of Biases The examples are heavily biased towards cell and molecular biology and are enriched in examples from papers published in EMBO Press journals (URL) ### Other Known Limitations ## Additional Information ### Dataset Curators Thomas Lemberger, EMBO. ### Licensing Information CC BY 4.0 ### Contributions Thanks to @tlemberger and @drAbreu for adding this dataset.
[ "# Dataset Card for sd-nlp", "## Table of Contents\n- [Dataset Card for [EMBO/sd-nlp-non-tokenized]](#dataset-card-for-dataset-name)\n - Table of Contents\n - Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n - Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n - Dataset Creation\n - Curation Rationale\n - Source Data\n - Initial Data Collection and Normalization\n - Who are the source language producers?\n - Annotations\n - Annotation process\n - Who are the annotators?\n - Personal and Sensitive Information\n - Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n - Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n- Homepage: URL\n- Repository: URL\n- Paper:\n- Leaderboard:\n- Point of Contact: thomas.lemberger@URL, URL@URL", "### Dataset Summary\nThis dataset is based on the content of the SourceData (URL) database, which contains manually annotated figure legends written in English and extracted from scientific papers in the domain of cell and molecular biology (Liechti et al, Nature Methods, 2017, URL \nUnlike the dataset 'sd-nlp', pre-tokenized with the 'roberta-base' tokenizer, this dataset is not previously tokenized, but just splitted into words. Users can therefore use it to fine-tune other models. \nAdditional details at URL", "### Supported Tasks and Leaderboards\nTags are provided as IOB2-style tags).\n'PANELIZATION': figure captions (or figure legends) are usually composed of segments that each refer to one of several 'panels' of the full figure. Panels tend to represent results obtained with a coherent method and depicts data points that can be meaningfully compared to each other. 'PANELIZATION' provide the start (B-PANEL_START) of these segments and allow to train for recogntion of the boundary between consecutive panel lengends.\n'NER': biological and chemical entities are labeled. Specifically the following entities are tagged:\n- 'SMALL_MOLECULE': small molecules\n- 'GENEPROD': gene products (genes and proteins)\n- 'SUBCELLULAR': subcellular components\n- 'CELL': cell types and cell lines.\n- 'TISSUE': tissues and organs\n- 'ORGANISM': species\n- 'EXP_ASSAY': experimental assays\n'ROLES': the role of entities with regard to the causal hypotheses tested in the reported results. The tags are:\n- 'CONTROLLED_VAR': entities that are associated with experimental variables and that subjected to controlled and targeted perturbations.\n- 'MEASURED_VAR': entities that are associated with the variables measured and the object of the measurements.\n'BORING': entities are marked with the tag 'BORING' when they are more of descriptive value and not directly associated with causal hypotheses ('boring' is not an ideal choice of word, but it is short...). 
Typically, these entities are so-called 'reporter' geneproducts, entities used as common baseline across samples, or specify the context of the experiment (cellular system, species, etc...).", "### Languages\nThe text in the dataset is English.", "## Dataset Structure", "### Data Instances", "### Data Fields\n- 'text': 'str' of the text\n- 'label_ids' dictionary composed of list of strings on a character-level:\n - 'entity_types': 'list' of 'strings' for the IOB2 tags for entity type; possible value in '[\"O\", \"I-SMALL_MOLECULE\", \"B-SMALL_MOLECULE\", \"I-GENEPROD\", \"B-GENEPROD\", \"I-SUBCELLULAR\", \"B-SUBCELLULAR\", \"I-CELL\", \"B-CELL\", \"I-TISSUE\", \"B-TISSUE\", \"I-ORGANISM\", \"B-ORGANISM\", \"I-EXP_ASSAY\", \"B-EXP_ASSAY\"]'\n - 'panel_start': 'list' of 'strings' for IOB2 tags '[\"O\", \"B-PANEL_START\"]'", "### Data Splits", "## Dataset Creation", "### Curation Rationale\nThe dataset was built to train models for the automatic extraction of a knowledge graph based from the scientific literature. The dataset can be used to train character-based models for text segmentation and named entity recognition.", "### Source Data", "#### Initial Data Collection and Normalization\nFigure legends were annotated according to the SourceData framework described in Liechti et al 2017 (Nature Methods, 2017, URL The curation tool at URL was used to segment figure legends into panel legends, tag enities, assign experiemental roles and normalize with standard identifiers (not available in this dataset). The source data was downloaded from the SourceData API (URL) on 21 Jan 2021.", "#### Who are the source language producers?\nThe examples are extracted from the figure legends from scientific papers in cell and molecular biology.", "### Annotations", "#### Annotation process\nThe annotations were produced manually with expert curators from the SourceData project (URL)", "#### Who are the annotators?\nCurators of the SourceData project.", "### Personal and Sensitive Information\nNone known.", "## Considerations for Using the Data", "### Social Impact of Dataset\nNot applicable.", "### Discussion of Biases\nThe examples are heavily biased towards cell and molecular biology and are enriched in examples from papers published in EMBO Press journals (URL)", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\nThomas Lemberger, EMBO.", "### Licensing Information\nCC BY 4.0", "### Contributions\nThanks to @tlemberger and @drAbreu for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-multi-class-classification #task_ids-named-entity-recognition #task_ids-parsing #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #language-English #license-cc-by-4.0 #region-us \n", "# Dataset Card for sd-nlp", "## Table of Contents\n- [Dataset Card for [EMBO/sd-nlp-non-tokenized]](#dataset-card-for-dataset-name)\n - Table of Contents\n - Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n - Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n - Dataset Creation\n - Curation Rationale\n - Source Data\n - Initial Data Collection and Normalization\n - Who are the source language producers?\n - Annotations\n - Annotation process\n - Who are the annotators?\n - Personal and Sensitive Information\n - Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n - Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n- Homepage: URL\n- Repository: URL\n- Paper:\n- Leaderboard:\n- Point of Contact: thomas.lemberger@URL, URL@URL", "### Dataset Summary\nThis dataset is based on the content of the SourceData (URL) database, which contains manually annotated figure legends written in English and extracted from scientific papers in the domain of cell and molecular biology (Liechti et al, Nature Methods, 2017, URL \nUnlike the dataset 'sd-nlp', pre-tokenized with the 'roberta-base' tokenizer, this dataset is not previously tokenized, but just splitted into words. Users can therefore use it to fine-tune other models. \nAdditional details at URL", "### Supported Tasks and Leaderboards\nTags are provided as IOB2-style tags).\n'PANELIZATION': figure captions (or figure legends) are usually composed of segments that each refer to one of several 'panels' of the full figure. Panels tend to represent results obtained with a coherent method and depicts data points that can be meaningfully compared to each other. 'PANELIZATION' provide the start (B-PANEL_START) of these segments and allow to train for recogntion of the boundary between consecutive panel lengends.\n'NER': biological and chemical entities are labeled. Specifically the following entities are tagged:\n- 'SMALL_MOLECULE': small molecules\n- 'GENEPROD': gene products (genes and proteins)\n- 'SUBCELLULAR': subcellular components\n- 'CELL': cell types and cell lines.\n- 'TISSUE': tissues and organs\n- 'ORGANISM': species\n- 'EXP_ASSAY': experimental assays\n'ROLES': the role of entities with regard to the causal hypotheses tested in the reported results. The tags are:\n- 'CONTROLLED_VAR': entities that are associated with experimental variables and that subjected to controlled and targeted perturbations.\n- 'MEASURED_VAR': entities that are associated with the variables measured and the object of the measurements.\n'BORING': entities are marked with the tag 'BORING' when they are more of descriptive value and not directly associated with causal hypotheses ('boring' is not an ideal choice of word, but it is short...). 
Typically, these entities are so-called 'reporter' geneproducts, entities used as common baseline across samples, or specify the context of the experiment (cellular system, species, etc...).", "### Languages\nThe text in the dataset is English.", "## Dataset Structure", "### Data Instances", "### Data Fields\n- 'text': 'str' of the text\n- 'label_ids' dictionary composed of list of strings on a character-level:\n - 'entity_types': 'list' of 'strings' for the IOB2 tags for entity type; possible value in '[\"O\", \"I-SMALL_MOLECULE\", \"B-SMALL_MOLECULE\", \"I-GENEPROD\", \"B-GENEPROD\", \"I-SUBCELLULAR\", \"B-SUBCELLULAR\", \"I-CELL\", \"B-CELL\", \"I-TISSUE\", \"B-TISSUE\", \"I-ORGANISM\", \"B-ORGANISM\", \"I-EXP_ASSAY\", \"B-EXP_ASSAY\"]'\n - 'panel_start': 'list' of 'strings' for IOB2 tags '[\"O\", \"B-PANEL_START\"]'", "### Data Splits", "## Dataset Creation", "### Curation Rationale\nThe dataset was built to train models for the automatic extraction of a knowledge graph based from the scientific literature. The dataset can be used to train character-based models for text segmentation and named entity recognition.", "### Source Data", "#### Initial Data Collection and Normalization\nFigure legends were annotated according to the SourceData framework described in Liechti et al 2017 (Nature Methods, 2017, URL The curation tool at URL was used to segment figure legends into panel legends, tag enities, assign experiemental roles and normalize with standard identifiers (not available in this dataset). The source data was downloaded from the SourceData API (URL) on 21 Jan 2021.", "#### Who are the source language producers?\nThe examples are extracted from the figure legends from scientific papers in cell and molecular biology.", "### Annotations", "#### Annotation process\nThe annotations were produced manually with expert curators from the SourceData project (URL)", "#### Who are the annotators?\nCurators of the SourceData project.", "### Personal and Sensitive Information\nNone known.", "## Considerations for Using the Data", "### Social Impact of Dataset\nNot applicable.", "### Discussion of Biases\nThe examples are heavily biased towards cell and molecular biology and are enriched in examples from papers published in EMBO Press journals (URL)", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\nThomas Lemberger, EMBO.", "### Licensing Information\nCC BY 4.0", "### Contributions\nThanks to @tlemberger and @drAbreu for adding this dataset." ]
4a706ce4d084ae644acb17bac7fd0919e493dbeb
# Dataset Card for Fashionpedia_4_categories This dataset is a variation of the fashionpedia dataset available [here](https://huggingface.co/datasets/detection-datasets/fashionpedia), with 2 key differences: - It contains only 4 categories: - Clothing - Shoes - Bags - Accessories - New splits were created: - Train: 90% of the images - Val: 5% - Test: 5% The goal is to make the detection task easier with 4 categories instead of 46 for the full fashionpedia dataset. This dataset was created using the `detection_datasets` library ([GitHub](https://github.com/blinjrm/detection-datasets), [PyPI](https://pypi.org/project/detection-datasets/)); the full creation [notebook](https://blinjrm.github.io/detection-datasets/tutorials/2_Transform/) walks through the process. In a nutshell, the following mapping was applied: ```python mapping = { 'shirt, blouse': 'clothing', 'top, t-shirt, sweatshirt': 'clothing', 'sweater': 'clothing', 'cardigan': 'clothing', 'jacket': 'clothing', 'vest': 'clothing', 'pants': 'clothing', 'shorts': 'clothing', 'skirt': 'clothing', 'coat': 'clothing', 'dress': 'clothing', 'jumpsuit': 'clothing', 'cape': 'clothing', 'glasses': 'accessories', 'hat': 'accessories', 'headband, head covering, hair accessory': 'accessories', 'tie': 'accessories', 'glove': 'accessories', 'belt': 'accessories', 'tights, stockings': 'accessories', 'sock': 'accessories', 'shoe': 'shoes', 'bag, wallet': 'bags', 'scarf': 'accessories', } ``` As a result, annotations with no category equivalent in the mapping have been dropped.
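For illustration, the same consolidation can be reproduced in plain Python over any COCO-style annotation list. The `category_name` key and the annotation structure below are hypothetical stand-ins, not the dataset's actual schema:

```python
# Subset of the mapping above; the remaining entries follow the same pattern.
mapping = {
    "shirt, blouse": "clothing",
    "pants": "clothing",
    "shoe": "shoes",
    "bag, wallet": "bags",
    "belt": "accessories",
}
new_categories = sorted(set(mapping.values()))
cat2id = {name: i for i, name in enumerate(new_categories)}

def remap(annotations):
    """Remap categories and drop annotations with no equivalent in the mapping."""
    out = []
    for ann in annotations:
        target = mapping.get(ann["category_name"])
        if target is not None:
            out.append({**ann, "category_name": target, "category_id": cat2id[target]})
    return out

# 'umbrella' stands in for a source category outside the mapping, so it is dropped.
print(remap([{"category_name": "shoe", "bbox": [0, 0, 10, 10]},
             {"category_name": "umbrella", "bbox": [5, 5, 8, 8]}]))
```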
detection-datasets/fashionpedia_4_categories
[ "task_categories:object-detection", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:fashionpedia", "language:en", "license:cc-by-4.0", "object-detection", "fashion", "computer-vision", "region:us" ]
2022-09-22T13:09:27+00:00
{"language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["fashionpedia"], "task_categories": ["object-detection"], "paperswithcode_id": "fashionpedia", "pretty_name": "Fashionpedia_4_categories", "tags": ["object-detection", "fashion", "computer-vision"]}
2022-09-22T13:45:18+00:00
[]
[ "en" ]
TAGS #task_categories-object-detection #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-fashionpedia #language-English #license-cc-by-4.0 #object-detection #fashion #computer-vision #region-us
# Dataset Card for Fashionpedia_4_categories This dataset is a variation of the fashionpedia dataset available here, with 2 key differences: - It contains only 4 categories: - Clothing - Shoes - Bags - Accessories - New splits were created: - Train: 90% of the images - Val: 5% - Test 5% The goal is to make the detection task easier with 4 categories instead of 46 for the full fashionpedia dataset. This dataset was created using the 'detection_datasets' library (GitHub, PyPI), you can check here the full creation notebook. In a nutshell, the following mapping was applied: As a result, annotations with no category equivalent in the mapping have been dropped.
[ "# Dataset Card for Fashionpedia_4_categories\n\nThis dataset is a variation of the fashionpedia dataset available here, with 2 key differences:\n- It contains only 4 categories:\n - Clothing\n - Shoes\n - Bags\n - Accessories\n- New splits were created:\n - Train: 90% of the images\n - Val: 5%\n - Test 5%\n\nThe goal is to make the detection task easier with 4 categories instead of 46 for the full fashionpedia dataset.\n\nThis dataset was created using the 'detection_datasets' library (GitHub, PyPI), you can check here the full creation notebook.\n\nIn a nutshell, the following mapping was applied:\n\n\nAs a result, annotations with no category equivalent in the mapping have been dropped." ]
[ "TAGS\n#task_categories-object-detection #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-fashionpedia #language-English #license-cc-by-4.0 #object-detection #fashion #computer-vision #region-us \n", "# Dataset Card for Fashionpedia_4_categories\n\nThis dataset is a variation of the fashionpedia dataset available here, with 2 key differences:\n- It contains only 4 categories:\n - Clothing\n - Shoes\n - Bags\n - Accessories\n- New splits were created:\n - Train: 90% of the images\n - Val: 5%\n - Test 5%\n\nThe goal is to make the detection task easier with 4 categories instead of 46 for the full fashionpedia dataset.\n\nThis dataset was created using the 'detection_datasets' library (GitHub, PyPI), you can check here the full creation notebook.\n\nIn a nutshell, the following mapping was applied:\n\n\nAs a result, annotations with no category equivalent in the mapping have been dropped." ]
2e7fdae1b8a959fa70bdadea392312869a02c744
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: facebook/bart-large-cnn * Dataset: cnn_dailymail * Config: 3.0.0 * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@samuelallen123](https://huggingface.co/samuelallen123) for evaluating this model.
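For reference, a comparable evaluation can be sketched locally with the `transformers`, `datasets`, and `evaluate` libraries. The slice size and generation settings below are illustrative choices for a quick run, not the parameters AutoTrain used:

```python
from datasets import load_dataset
from transformers import pipeline
import evaluate

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
ds = load_dataset("cnn_dailymail", "3.0.0", split="test[:8]")  # small slice for demonstration

# Generate summaries for the articles, truncating inputs to the model's max length.
predictions = [out["summary_text"] for out in summarizer(ds["article"], truncation=True)]

rouge = evaluate.load("rouge")
print(rouge.compute(predictions=predictions, references=ds["highlights"]))
```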
autoevaluate/autoeval-eval-cnn_dailymail-3.0.0-6f9c29-1531855204
[ "autotrain", "evaluation", "region:us" ]
2022-09-22T13:15:05+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "facebook/bart-large-cnn", "metrics": ["accuracy"], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "test", "col_mapping": {"text": "article", "target": "highlights"}}}
2022-09-22T14:17:52+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: facebook/bart-large-cnn * Dataset: cnn_dailymail * Config: 3.0.0 * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @samuelallen123 for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: facebook/bart-large-cnn\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @samuelallen123 for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: facebook/bart-large-cnn\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @samuelallen123 for evaluating this model." ]
ad46e5b6677b9bd3aa6368c688dac0fc30d5e4ca
Large file storage for the paper `Convergent Representations of Computer Programs in Human and Artificial Neural Networks` by Shashank Srikant*, Benjamin Lipkin*, Anna A. Ivanova, Evelina Fedorenko, and Una-May O'Reilly. The code repository is hosted on [GitHub](https://github.com/ALFA-group/code-representations-ml-brain). Check it out! If you use this work, please cite: ```bibtex @inproceedings{SrikantLipkin2022, author = {Srikant, Shashank and Lipkin, Benjamin and Ivanova, Anna and Fedorenko, Evelina and O'Reilly, Una-May}, title = {Convergent Representations of Computer Programs in Human and Artificial Neural Networks}, year = {2022}, booktitle = {Advances in Neural Information Processing Systems}, } ```
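Since this repository mainly serves as large-file storage for the paper's artifacts, one plausible way to fetch its contents is via `huggingface_hub`; the `repo_type` below assumes the files live in a dataset repo:

```python
from huggingface_hub import snapshot_download

# Download every file in the repo to a local cache directory.
local_dir = snapshot_download(
    repo_id="benlipkin/braincode-neurips2022",
    repo_type="dataset",  # assumption: the storage is hosted as a dataset repo
)
print(local_dir)
```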
benlipkin/braincode-neurips2022
[ "license:mit", "region:us" ]
2022-09-22T13:17:03+00:00
{"license": "mit"}
2022-09-22T16:24:45+00:00
[]
[]
TAGS #license-mit #region-us
Large file storage for the paper 'Convergent Representations of Computer Programs in Human and Artificial Neural Networks' by Shashank Srikant*, Benjamin Lipkin*, Anna A. Ivanova, Evelina Fedorenko, and Una-May O'Reilly. The code repository is hosted on GitHub. Check it out! If you use this work, please cite:
[]
[ "TAGS\n#license-mit #region-us \n" ]
9623e24bcc3da5ec8a7ab5ed6b194294d6a18358
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: SamuelAllen123/t5-efficient-large-nl36_fine_tune_sum_V2 * Dataset: samsum * Config: samsum * Split: train To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@samuelallen123](https://huggingface.co/samuelallen123) for evaluating this model.
autoevaluate/autoeval-eval-samsum-samsum-61187c-1532155205
[ "autotrain", "evaluation", "region:us" ]
2022-09-22T13:42:56+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "SamuelAllen123/t5-efficient-large-nl36_fine_tune_sum_V2", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "train", "col_mapping": {"text": "dialogue", "target": "summary"}}}
2022-09-22T15:40:56+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: SamuelAllen123/t5-efficient-large-nl36_fine_tune_sum_V2 * Dataset: samsum * Config: samsum * Split: train To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @samuelallen123 for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tune_sum_V2\n* Dataset: samsum\n* Config: samsum\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @samuelallen123 for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tune_sum_V2\n* Dataset: samsum\n* Config: samsum\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @samuelallen123 for evaluating this model." ]
8178d8c493897dc0cf759dd21413c118c0423718
Data sourced from [this repository](https://github.com/wangle1218/KBQA-for-Diagnosis/tree/main/nlu/bert_intent_recognition/data).
nlp-guild/intent-recognition-biomedical
[ "license:mit", "region:us" ]
2022-09-22T15:10:30+00:00
{"license": "mit"}
2022-09-22T15:13:44+00:00
[]
[]
TAGS #license-mit #region-us
source
[]
[ "TAGS\n#license-mit #region-us \n" ]
40bdb13a08d7acbfdefc8757fcf8992b7963e060
# Dataset Card for "gradio-dependents" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
open-source-metrics/gradio-dependents
[ "region:us" ]
2022-09-22T19:32:03+00:00
{"dataset_info": {"features": [{"name": "name", "dtype": "string"}, {"name": "stars", "dtype": "int64"}, {"name": "forks", "dtype": "int64"}], "splits": [{"name": "package", "num_bytes": 2413, "num_examples": 60}, {"name": "repository", "num_bytes": 185253, "num_examples": 3926}], "download_size": 112345, "dataset_size": 187666}}
2024-02-16T20:56:10+00:00
[]
[]
TAGS #region-us
# Dataset Card for "gradio-dependents" More Information needed
[ "# Dataset Card for \"gradio-dependents\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"gradio-dependents\"\n\nMore Information needed" ]
aec7dd1b87ea54c67b2823ba5fc09c2b9ede8f6e
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Zero-Shot Text Classification * Model: autoevaluate/zero-shot-classification * Dataset: autoevaluate/zero-shot-classification-sample * Config: autoevaluate--zero-shot-classification-sample * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model.
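A hedged sketch of reproducing this evaluation with the zero-shot-classification pipeline. It assumes the model is an NLI-style checkpoint compatible with that pipeline, and that the sample dataset exposes `text`, `classes` (a list of candidate labels), and `target` columns, as the col_mapping in this repo's metadata suggests:

```python
from datasets import load_dataset
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="autoevaluate/zero-shot-classification")
ds = load_dataset("autoevaluate/zero-shot-classification-sample", split="test")

correct = 0
for row in ds:
    # Take the highest-scoring candidate label as the prediction.
    pred = classifier(row["text"], candidate_labels=row["classes"])["labels"][0]
    correct += int(pred == row["target"])
print(f"accuracy: {correct / len(ds):.3f}")
```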
autoevaluate/autoeval-staging-eval-autoevaluate__zero-shot-classification-sample-autoevalu-ded028-2312
[ "autotrain", "evaluation", "region:us" ]
2022-09-22T19:55:23+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["autoevaluate/zero-shot-classification-sample"], "eval_info": {"task": "text_zero_shot_classification", "model": "autoevaluate/zero-shot-classification", "metrics": [], "dataset_name": "autoevaluate/zero-shot-classification-sample", "dataset_config": "autoevaluate--zero-shot-classification-sample", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}}
2022-09-22T20:03:51+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Zero-Shot Text Classification * Model: autoevaluate/zero-shot-classification * Dataset: autoevaluate/zero-shot-classification-sample * Config: autoevaluate--zero-shot-classification-sample * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @mathemakitten for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: autoevaluate/zero-shot-classification\n* Dataset: autoevaluate/zero-shot-classification-sample\n* Config: autoevaluate--zero-shot-classification-sample\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @mathemakitten for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: autoevaluate/zero-shot-classification\n* Dataset: autoevaluate/zero-shot-classification-sample\n* Config: autoevaluate--zero-shot-classification-sample\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @mathemakitten for evaluating this model." ]
6abfd356ba7ac593c607c0fee3f8666e39db69a6
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Zero-Shot Text Classification * Model: autoevaluate/zero-shot-classification * Dataset: autoevaluate/zero-shot-classification-sample * Config: autoevaluate--zero-shot-classification-sample * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model.
autoevaluate/autoeval-staging-eval-autoevaluate__zero-shot-classification-sample-autoevalu-ab10d5-2413
[ "autotrain", "evaluation", "region:us" ]
2022-09-22T20:11:36+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["autoevaluate/zero-shot-classification-sample"], "eval_info": {"task": "text_zero_shot_classification", "model": "autoevaluate/zero-shot-classification", "metrics": [], "dataset_name": "autoevaluate/zero-shot-classification-sample", "dataset_config": "autoevaluate--zero-shot-classification-sample", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}}
2022-09-22T20:12:01+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Zero-Shot Text Classification * Model: autoevaluate/zero-shot-classification * Dataset: autoevaluate/zero-shot-classification-sample * Config: autoevaluate--zero-shot-classification-sample * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @mathemakitten for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: autoevaluate/zero-shot-classification\n* Dataset: autoevaluate/zero-shot-classification-sample\n* Config: autoevaluate--zero-shot-classification-sample\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @mathemakitten for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: autoevaluate/zero-shot-classification\n* Dataset: autoevaluate/zero-shot-classification-sample\n* Config: autoevaluate--zero-shot-classification-sample\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @mathemakitten for evaluating this model." ]
62eddd2262a1357f9574f59f54a6eac7794e6d07
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Zero-Shot Text Classification * Model: autoevaluate/zero-shot-classification * Dataset: Tristan/zero-shot-classification-large-test * Config: Tristan--zero-shot-classification-large-test * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model.
autoevaluate/autoeval-staging-eval-Tristan__zero-shot-classification-large-test-Tristan__z-914f2c-2514
[ "autotrain", "evaluation", "region:us" ]
2022-09-22T20:16:51+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["Tristan/zero-shot-classification-large-test"], "eval_info": {"task": "text_zero_shot_classification", "model": "autoevaluate/zero-shot-classification", "metrics": [], "dataset_name": "Tristan/zero-shot-classification-large-test", "dataset_config": "Tristan--zero-shot-classification-large-test", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}}
2022-09-22T21:03:52+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Zero-Shot Text Classification * Model: autoevaluate/zero-shot-classification * Dataset: Tristan/zero-shot-classification-large-test * Config: Tristan--zero-shot-classification-large-test * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @mathemakitten for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: autoevaluate/zero-shot-classification\n* Dataset: Tristan/zero-shot-classification-large-test\n* Config: Tristan--zero-shot-classification-large-test\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @mathemakitten for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: autoevaluate/zero-shot-classification\n* Dataset: Tristan/zero-shot-classification-large-test\n* Config: Tristan--zero-shot-classification-large-test\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @mathemakitten for evaluating this model." ]
6c3ed433023c6b7830a9f1f957ee511c31bb4ce9
## Description This dataset contains triples of the form ("query1", "query2", "label"), where labels are mapped as follows: - similar: 1 - not similar: 0 - ambiguous: -1
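As a consumption sketch, assuming the repo loads through `datasets` with columns named `query1`, `query2`, and `label` and a single default split (both assumptions, not confirmed by the card):

```python
from datasets import load_dataset

ds = load_dataset("neeva/query2query_evaluation", split="train")  # split name assumed

# Keep only unambiguous pairs, leaving a binary similar / not-similar task.
binary = ds.filter(lambda row: row["label"] != -1)
print(len(binary), "unambiguous pairs; first example:", binary[0])
```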
neeva/query2query_evaluation
[ "task_categories:sentence-similarity", "region:us" ]
2022-09-22T20:43:54+00:00
{"task_categories": ["sentence-similarity"]}
2022-09-22T21:58:34+00:00
[]
[]
TAGS #task_categories-sentence-similarity #region-us
## Description This dataset contains triples of the form "query1", "query2", "label" where labels are mapped as follows - similar: 1 - not similar: 0 - ambiguous: -1
[ "## Description\n\nThis dataset contains triples of the form \"query1\", \"query2\", \"label\" where labels are mapped as follows\n- similar: 1\n- not similar: 0\n- ambiguous: -1" ]
[ "TAGS\n#task_categories-sentence-similarity #region-us \n", "## Description\n\nThis dataset contains triples of the form \"query1\", \"query2\", \"label\" where labels are mapped as follows\n- similar: 1\n- not similar: 0\n- ambiguous: -1" ]
69cb9d1035e5bbc34516d9dc016b50aa03e279c7
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Text Classification * Model: Jorgeutd/sagemaker-roberta-base-emotion * Dataset: emotion * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@neehau](https://huggingface.co/neehau) for evaluating this model.
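An illustrative local counterpart using the text-classification pipeline. It assumes the model's config maps outputs to the same emotion label names the dataset uses; if it emits generic `LABEL_i` ids instead, the string-to-id step would need adjusting:

```python
from datasets import load_dataset
from transformers import pipeline
import evaluate

clf = pipeline("text-classification", model="Jorgeutd/sagemaker-roberta-base-emotion")
ds = load_dataset("emotion", split="test[:32]")  # small slice for demonstration

# Map predicted label strings back to class ids via the dataset's own features.
label_feature = ds.features["label"]
preds = [label_feature.str2int(out["label"]) for out in clf(ds["text"], truncation=True)]

accuracy = evaluate.load("accuracy")
print(accuracy.compute(predictions=preds, references=ds["label"]))
```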
autoevaluate/autoeval-eval-emotion-default-98e72c-1536755281
[ "autotrain", "evaluation", "region:us" ]
2022-09-22T20:50:45+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "Jorgeutd/sagemaker-roberta-base-emotion", "metrics": [], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}}
2022-09-22T20:51:27+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Text Classification * Model: Jorgeutd/sagemaker-roberta-base-emotion * Dataset: emotion * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @neehau for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: Jorgeutd/sagemaker-roberta-base-emotion\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @neehau for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: Jorgeutd/sagemaker-roberta-base-emotion\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @neehau for evaluating this model." ]
70ade0819ad2c1f3b42f83e859a489b457f667e8
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Zero-Shot Text Classification * Model: autoevaluate/zero-shot-classification * Dataset: Tristan/zero-shot-classification-large-test * Config: Tristan--zero-shot-classification-large-test * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@Tristan](https://huggingface.co/Tristan) for evaluating this model.
autoevaluate/autoeval-staging-eval-Tristan__zero-shot-classification-large-test-Tristan__z-eb4ad9-22
[ "autotrain", "evaluation", "region:us" ]
2022-09-22T21:31:31+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["Tristan/zero-shot-classification-large-test"], "eval_info": {"task": "text_zero_shot_classification", "model": "autoevaluate/zero-shot-classification", "metrics": [], "dataset_name": "Tristan/zero-shot-classification-large-test", "dataset_config": "Tristan--zero-shot-classification-large-test", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}}
2022-09-22T23:38:10+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Zero-Shot Text Classification * Model: autoevaluate/zero-shot-classification * Dataset: Tristan/zero-shot-classification-large-test * Config: Tristan--zero-shot-classification-large-test * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @Tristan for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: autoevaluate/zero-shot-classification\n* Dataset: Tristan/zero-shot-classification-large-test\n* Config: Tristan--zero-shot-classification-large-test\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @Tristan for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: autoevaluate/zero-shot-classification\n* Dataset: Tristan/zero-shot-classification-large-test\n* Config: Tristan--zero-shot-classification-large-test\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @Tristan for evaluating this model." ]
bc0e6e13bd30db81e45194b7e95ba06ea15c40f4
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Zero-Shot Text Classification * Model: Tristan/opt-66b-copy * Dataset: Tristan/zero-shot-classification-large-test * Config: Tristan--zero-shot-classification-large-test * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model.
autoevaluate/autoeval-staging-eval-Tristan__zero-shot-classification-large-test-Tristan__z-d81307-16956302
[ "autotrain", "evaluation", "region:us" ]
2022-09-23T17:13:34+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["Tristan/zero-shot-classification-large-test"], "eval_info": {"task": "text_zero_shot_classification", "model": "Tristan/opt-66b-copy", "metrics": [], "dataset_name": "Tristan/zero-shot-classification-large-test", "dataset_config": "Tristan--zero-shot-classification-large-test", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}}
2022-09-23T20:43:03+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Zero-Shot Text Classification * Model: Tristan/opt-66b-copy * Dataset: Tristan/zero-shot-classification-large-test * Config: Tristan--zero-shot-classification-large-test * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @mathemakitten for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: Tristan/opt-66b-copy\n* Dataset: Tristan/zero-shot-classification-large-test\n* Config: Tristan--zero-shot-classification-large-test\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @mathemakitten for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: Tristan/opt-66b-copy\n* Dataset: Tristan/zero-shot-classification-large-test\n* Config: Tristan--zero-shot-classification-large-test\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @mathemakitten for evaluating this model." ]
36753cc241cc2951be69b6e230f3d7a028e5b066
# Dataset Card for "issues" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
open-source-metrics/issues
[ "region:us" ]
2022-09-23T17:41:08+00:00
{"dataset_info": {"features": [{"name": "dates", "dtype": "string"}, {"name": "type", "struct": [{"name": "authorAssociation", "dtype": "string"}, {"name": "comment", "dtype": "bool"}, {"name": "issue", "dtype": "bool"}]}], "splits": [{"name": "transformers", "num_bytes": 4712948, "num_examples": 133536}, {"name": "peft", "num_bytes": 228526, "num_examples": 6670}, {"name": "evaluate", "num_bytes": 63940, "num_examples": 1825}, {"name": "huggingface_hub", "num_bytes": 288140, "num_examples": 8274}, {"name": "accelerate", "num_bytes": 361197, "num_examples": 10324}, {"name": "datasets", "num_bytes": 821418, "num_examples": 23444}, {"name": "optimum", "num_bytes": 195473, "num_examples": 5630}, {"name": "pytorch_image_models", "num_bytes": 143735, "num_examples": 4167}, {"name": "gradio", "num_bytes": 1118865, "num_examples": 30797}, {"name": "tokenizers", "num_bytes": 195421, "num_examples": 5703}, {"name": "diffusers", "num_bytes": 1346732, "num_examples": 38439}, {"name": "safetensors", "num_bytes": 48986, "num_examples": 1418}, {"name": "candle", "num_bytes": 153795, "num_examples": 4054}, {"name": "text_generation_inference", "num_bytes": 204982, "num_examples": 6044}, {"name": "chat_ui", "num_bytes": 82128, "num_examples": 2360}, {"name": "hub_docs", "num_bytes": 137648, "num_examples": 3914}], "download_size": 3150086, "dataset_size": 10103934}, "configs": [{"config_name": "default", "data_files": [{"split": "peft", "path": "data/peft-*"}, {"split": "hub_docs", "path": "data/hub_docs-*"}, {"split": "evaluate", "path": "data/evaluate-*"}, {"split": "huggingface_hub", "path": "data/huggingface_hub-*"}, {"split": "accelerate", "path": "data/accelerate-*"}, {"split": "datasets", "path": "data/datasets-*"}, {"split": "optimum", "path": "data/optimum-*"}, {"split": "pytorch_image_models", "path": "data/pytorch_image_models-*"}, {"split": "gradio", "path": "data/gradio-*"}, {"split": "tokenizers", "path": "data/tokenizers-*"}, {"split": "diffusers", "path": "data/diffusers-*"}, {"split": "transformers", "path": "data/transformers-*"}, {"split": "safetensors", "path": "data/safetensors-*"}]}]}
2024-02-15T12:00:57+00:00
[]
[]
TAGS #region-us
# Dataset Card for "issues" More Information needed
[ "# Dataset Card for \"issues\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"issues\"\n\nMore Information needed" ]
51e1265fc8118bc9273550c3ade7ee4e546e0bb9
# Dataset Card for WinoGAViL

- [Dataset Description](#dataset-description)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Colab notebook code for Winogavil evaluation with CLIP](#colab-notebook-code-for-winogavil-evaluation-with-clip)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

WinoGAViL is a challenging dataset for evaluating vision-and-language commonsense reasoning abilities. Given a set of images, a cue, and a number K, the task is to select the K images that best fit the association. The dataset was collected via the WinoGAViL online game, built to gather vision-and-language associations (e.g., werewolves to a full moon). Inspired by the popular card game Codenames, a spymaster gives a textual cue related to several visual candidates, and another player has to identify them. Human players are rewarded for creating associations that are challenging for a rival AI model but still solvable by other human players. We evaluate several state-of-the-art vision-and-language models, finding that they are intuitive for humans (>90% Jaccard index) but challenging for state-of-the-art AI models, where the best model (ViLT) achieves a score of 52%, succeeding mostly where the cue is visually salient. Our analysis, as well as the feedback we collect from players, indicates that the collected associations require diverse reasoning skills, including general knowledge, common sense, abstraction, and more.

- **Homepage:** https://winogavil.github.io/
- **Colab:** https://colab.research.google.com/drive/19qcPovniLj2PiLlP75oFgsK-uhTr6SSi
- **Repository:** https://github.com/WinoGAViL/WinoGAViL-experiments/
- **Paper:** https://arxiv.org/abs/2207.12576
- **Leaderboard:** https://winogavil.github.io/leaderboard
- **Point of Contact:** [email protected]; [email protected]

### Supported Tasks and Leaderboards

https://winogavil.github.io/leaderboard.
https://paperswithcode.com/dataset/winogavil.

## Colab notebook code for Winogavil evaluation with CLIP

https://colab.research.google.com/drive/19qcPovniLj2PiLlP75oFgsK-uhTr6SSi

### Languages

English.

## Dataset Structure

### Data Fields

candidates (list): ["bison", "shelter", "beard", "flea", "cattle", "shave"] - list of image candidates.
cue (string): pogonophile - the generated cue.
associations (string): ["bison", "beard", "shave"] - the images associated with the cue, as selected by the user.
score_fool_the_ai (int64): 80 - the spymaster's score (100 - model score) for fooling the AI, with the CLIP RN50 model.
num_associations (int64): 3 - the number of images selected as associative with the cue.
num_candidates (int64): 6 - the total number of candidates.
solvers_jaccard_mean (float64): 1.0 - average of three solvers' scores on the generated association instance.
solvers_jaccard_std (float64): 1.0 - standard deviation of three solvers' scores on the generated association instance.
ID (int64): 367 - association ID.

### Data Splits

There is a single TEST split. In the accompanying paper and code we sample it to create different training sets, but the intended use is to use WinoGAViL as a test set.
There are different numbers of candidates, which create different difficulty levels:
-- With 5 candidates, the expected score of a random model is 38%.
-- With 6 candidates, the expected score of a random model is 34%.
-- With 10 candidates, the expected score of a random model is 24%.
-- With 12 candidates, the expected score of a random model is 19%.

<details>
  <summary>Why is the random chance of success with 5 candidates 38%?</summary>

  It is a binomial distribution probability calculation.

  Assuming N=5 candidates and K=2 associations, there could be three events:
  (1) The probability that a random guess is correct in 0 associations is 0.3 (elaborated below), and the Jaccard index is 0 (there is no intersection between the correct labels and the wrong guesses). Therefore the expected random score is 0.
  (2) The probability that a random guess is correct in 1 association is 0.6, and the Jaccard index is 0.33 (intersection=1, union=3: one of the correct guesses and one of the wrong guesses). Therefore the expected random score is 0.6*0.33 = 0.198.
  (3) The probability that a random guess is correct in 2 associations is 0.1, and the Jaccard index is 1 (intersection=2, union=2). Therefore the expected random score is 0.1*1 = 0.1.
  * Together, when K=2, the expected score is 0+0.198+0.1 = 0.298.

  To calculate (1), the first guess needs to be wrong. There are 3 "wrong" guesses and 5 candidates, so the probability for it is 3/5. The next guess should also be wrong. Now there are only 2 "wrong" guesses and 4 candidates, so the probability for it is 2/4. Multiplying 3/5 * 2/4 = 0.3.
  The same goes for (2) and (3).

  Now we can perform the same calculation with K=3 associations.
  Assuming N=5 candidates and K=3 associations, there could be four events:
  (4) The probability that a random guess is correct in 0 associations is 0, and the Jaccard index is 0. Therefore the expected random score is 0.
  (5) The probability that a random guess is correct in 1 association is 0.3, and the Jaccard index is 0.2 (intersection=1, union=5). Therefore the expected random score is 0.3*0.2 = 0.06.
  (6) The probability that a random guess is correct in 2 associations is 0.6, and the Jaccard index is 0.5 (intersection=2, union=4). Therefore the expected random score is 0.6*0.5 = 0.3.
  (7) The probability that a random guess is correct in 3 associations is 0.1, and the Jaccard index is 1 (intersection=3, union=3). Therefore the expected random score is 0.1*1 = 0.1.
  * Together, when K=3, the expected score is 0+0.06+0.3+0.1 = 0.46.

Taking the average of 0.298 and 0.46, we reach 0.379.

The same process can be recalculated with 6 candidates (and K=2,3,4), 10 candidates (and K=2,3,4,5) and 12 candidates (and K=2,3,4,5,6).

</details>

## Dataset Creation

Inspired by the popular card game Codenames, a “spymaster” gives a textual cue related to several visual candidates, and another player has to identify them. Human players are rewarded for creating associations that are challenging for a rival AI model but still solvable by other human players.

### Annotations

#### Annotation process

We paid Amazon Mechanical Turk workers to play our game.

## Considerations for Using the Data

All associations were obtained with human annotators.

### Licensing Information

CC-BY 4.0

### Citation Information

@article{bitton2022winogavil,
  title={WinoGAViL: Gamified Association Benchmark to Challenge Vision-and-Language Models},
  author={Bitton, Yonatan and Guetta, Nitzan Bitton and Yosef, Ron and Elovici, Yuval and Bansal, Mohit and Stanovsky, Gabriel and Schwartz, Roy},
  journal={arXiv preprint arXiv:2207.12576},
  year={2022}
}
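As a sanity check on the expected-score derivation in the Data Splits section above, here is a small Python sketch. It is not part of the official WinoGAViL code; it simply brute-forces the expected Jaccard index of a random guesser that, as assumed in the derivation, picks exactly K images out of N candidates.

```python
# Brute-force the expected Jaccard index of a random guesser that selects
# exactly k of n candidates, with k of them being the true associations.
from itertools import combinations

def expected_random_jaccard(n: int, k: int) -> float:
    truth = set(range(k))  # by symmetry, fix which k candidates are correct
    scores = [
        len(truth & set(guess)) / len(truth | set(guess))
        for guess in combinations(range(n), k)
    ]
    return sum(scores) / len(scores)

# Matches the derivation above: 0.3 for K=2 and 0.46 for K=3 with N=5,
# averaging to the ~38% random baseline quoted in Data Splits.
print(expected_random_jaccard(5, 2))                                         # 0.3
print(expected_random_jaccard(5, 3))                                         # 0.46
print((expected_random_jaccard(5, 2) + expected_random_jaccard(5, 3)) / 2)   # 0.38
```

The exact values are 0.3 and 0.46 (the 0.298 in the derivation comes from rounding 1/3 to 0.33), so the averaged random baseline is ~0.38.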
nlphuji/winogavil
[ "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-4.0", "commonsense-reasoning", "visual-reasoning", "arxiv:2207.12576", "region:us" ]
2022-09-23T18:27:29+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_ids": [], "paperswithcode_id": "winogavil", "pretty_name": "WinoGAViL", "tags": ["commonsense-reasoning", "visual-reasoning"], "extra_gated_prompt": "By clicking on \u201cAccess repository\u201d below, you also agree that you are using it solely for research purposes. The full license agreement is available in the dataset files."}
2022-11-26T19:56:27+00:00
[ "2207.12576" ]
[ "en" ]
TAGS #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-4.0 #commonsense-reasoning #visual-reasoning #arxiv-2207.12576 #region-us
# Dataset Card for WinoGAViL - Dataset Description - Supported Tasks and Leaderboards - Colab notebook code for Winogavil evaluation with CLIP - Languages - Dataset Structure - Data Fields - Data Splits - Dataset Creation - Considerations for Using the Data - Licensing Information - Citation Information ## Dataset Description WinoGAViL is a challenging dataset for evaluating vision-and-language commonsense reasoning abilities. Given a set of images, a cue, and a number K, the task is to select the K images that best fits the association. This dataset was collected via the WinoGAViL online game to collect vision-and-language associations, (e.g., werewolves to a full moon). Inspired by the popular card game Codenames, a spymaster gives a textual cue related to several visual candidates, and another player has to identify them. Human players are rewarded for creating associations that are challenging for a rival AI model but still solvable by other human players. We evaluate several state-of-the-art vision-and-language models, finding that they are intuitive for humans (>90% Jaccard index) but challenging for state-of-the-art AI models, where the best model (ViLT) achieves a score of 52%, succeeding mostly where the cue is visually salient. Our analysis as well as the feedback we collect from players indicate that the collected associations require diverse reasoning skills, including general knowledge, common sense, abstraction, and more. - Homepage: URL - Colab URL - Repository: URL - Paper: URL - Leaderboard: URL - Point of Contact: winogavil@URL; yonatanbitton1@URL ### Supported Tasks and Leaderboards URL URL ## Colab notebook code for Winogavil evaluation with CLIP URL ### Languages English. ## Dataset Structure ### Data Fields candidates (list): ["bison", "shelter", "beard", "flea", "cattle", "shave"] - list of image candidates. cue (string): pogonophile - the generated cue. associations (string): ["bison", "beard", "shave"] - the images associated with the cue selected by the user. score_fool_the_ai (int64): 80 - the spymaster score (100 - model score) for fooling the AI, with CLIP RN50 model. num_associations (int64): 3 - The number of images selected as associative with the cue. num_candidates (int64): 6 - the number of total candidates. solvers_jaccard_mean (float64): 1.0 - three solvers scores average on the generated association instance. solvers_jaccard_std (float64): 1.0 - three solvers scores standard deviation on the generated association instance ID (int64): 367 - association ID. ### Data Splits There is a single TEST split. In the accompanied paper and code we sample it to create different training sets, but the intended use is to use winogavil as a test set. There are different number of candidates, which creates different difficulty levels: -- With 5 candidates, random model expected score is 38%. -- With 6 candidates, random model expected score is 34%. -- With 10 candidates, random model expected score is 24%. -- With 12 candidates, random model expected score is 19%. <details> <summary>Why random chance for success with 5 candidates is 38%?</summary> It is a binomial distribution probability calculation. Assuming N=5 candidates, and K=2 associations, there could be three events: (1) The probability for a random guess is correct in 0 associations is 0.3 (elaborate below), and the Jaccard index is 0 (there is no intersection between the correct labels and the wrong guesses). Therefore the expected random score is 0. 
(2) The probability for a random guess is correct in 1 associations is 0.6, and the Jaccard index is 0.33 (intersection=1, union=3, one of the correct guesses, and one of the wrong guesses). Therefore the expected random score is 0.6*0.33 = 0.198.
(3) The probability for a random guess is correct in 2 associations is 0.1, and the Jaccard index is 1 (intersection=2, union=2). Therefore the expected random score is 0.1*1 = 0.1.
* Together, when K=2, the expected score is 0+0.198+0.1 = 0.298.

To calculate (1), the first guess needs to be wrong. There are 3 "wrong" guesses and 5 candidates, so the probability for it is 3/5. The next guess should also be wrong. Now there are only 2 "wrong" guesses, and 4 candidates, so the probability for it is 2/4. Multiplying 3/5 * 2/4 = 0.3.
Same goes for (2) and (3).

Now we can perform the same calculation with K=3 associations.
Assuming N=5 candidates, and K=3 associations, there could be four events:
(4) The probability for a random guess is correct in 0 associations is 0, and the Jaccard index is 0. Therefore the expected random score is 0.
(5) The probability for a random guess is correct in 1 associations is 0.3, and the Jaccard index is 0.2 (intersection=1, union=5). Therefore the expected random score is 0.3*0.2 = 0.06.
(6) The probability for a random guess is correct in 2 associations is 0.6, and the Jaccard index is 0.5 (intersection=2, union=4). Therefore the expected random score is 0.6*0.5 = 0.3.
(7) The probability for a random guess is correct in 3 associations is 0.1, and the Jaccard index is 1 (intersection=3, union=3). Therefore the expected random score is 0.1*1 = 0.1.
* Together, when K=3, the expected score is 0+0.06+0.3+0.1 = 0.46.

Taking the average of 0.298 and 0.46 we reach 0.379.

Same process can be recalculated with 6 candidates (and K=2,3,4), 10 candidates (and K=2,3,4,5) and 12 candidates (and K=2,3,4,5,6).

</details>

## Dataset Creation

Inspired by the popular card game Codenames, a “spymaster” gives a textual cue related to several visual candidates, and another player has to identify them. Human players are rewarded for creating
associations that are challenging for a rival AI model but still solvable by other
human players.

### Annotations

#### Annotation process

We paid Amazon Mechanical Turk Workers to play our game.

## Considerations for Using the Data

All associations were obtained with human annotators.

### Licensing Information

CC-By 4.0

@article{bitton2022winogavil,
    title={WinoGAViL: Gamified Association Benchmark to Challenge Vision-and-Language Models},
    author={Bitton, Yonatan and Guetta, Nitzan Bitton and Yosef, Ron and Elovici, Yuval and Bansal, Mohit and Stanovsky, Gabriel and Schwartz, Roy},
    journal={arXiv preprint arXiv:2207.12576},
    year={2022}
}
[ "# Dataset Card for WinoGAViL\n\n- Dataset Description\n - Supported Tasks and Leaderboards\n - Colab notebook code for Winogavil evaluation with CLIP\n - Languages\n- Dataset Structure\n - Data Fields\n - Data Splits\n- Dataset Creation\n- Considerations for Using the Data\n - Licensing Information\n - Citation Information", "## Dataset Description\n\nWinoGAViL is a challenging dataset for evaluating vision-and-language commonsense reasoning abilities. Given a set of images, a cue, and a number K, the task is to select the K images that best fits the association. This dataset was collected via the WinoGAViL online game to collect vision-and-language associations, (e.g., werewolves to a full moon). Inspired by the popular card game Codenames, a spymaster gives a textual cue related to several visual candidates, and another player has to identify them. Human players are rewarded for creating associations that are challenging for a rival AI model but still solvable by other human players. We evaluate several state-of-the-art vision-and-language models, finding that they are intuitive for humans (>90% Jaccard index) but challenging for state-of-the-art AI models, where the best model (ViLT) achieves a score of 52%, succeeding mostly where the cue is visually salient. Our analysis as well as the feedback we collect from players indicate that the collected associations require diverse reasoning skills, including general knowledge, common sense, abstraction, and more. \n\n- Homepage: \nURL\n- Colab\nURL\n- Repository:\nURL\n- Paper:\nURL\n- Leaderboard:\nURL\n- Point of Contact:\nwinogavil@URL; yonatanbitton1@URL", "### Supported Tasks and Leaderboards\n\nURL \nURL", "## Colab notebook code for Winogavil evaluation with CLIP\nURL", "### Languages\n\nEnglish.", "## Dataset Structure", "### Data Fields\n\ncandidates (list): [\"bison\", \"shelter\", \"beard\", \"flea\", \"cattle\", \"shave\"] - list of image candidates. \ncue (string): pogonophile - the generated cue. \nassociations (string): [\"bison\", \"beard\", \"shave\"] - the images associated with the cue selected by the user. \nscore_fool_the_ai (int64): 80 - the spymaster score (100 - model score) for fooling the AI, with CLIP RN50 model. \nnum_associations (int64): 3 - The number of images selected as associative with the cue. \nnum_candidates (int64): 6 - the number of total candidates. \nsolvers_jaccard_mean (float64): 1.0 - three solvers scores average on the generated association instance. \nsolvers_jaccard_std (float64): 1.0 - three solvers scores standard deviation on the generated association instance\nID (int64): 367 - association ID.", "### Data Splits\nThere is a single TEST split. In the accompanied paper and code we sample it to create different training sets, but the intended use is to use winogavil as a test set.\nThere are different number of candidates, which creates different difficulty levels: \n -- With 5 candidates, random model expected score is 38%. \n -- With 6 candidates, random model expected score is 34%. \n -- With 10 candidates, random model expected score is 24%. \n -- With 12 candidates, random model expected score is 19%. \n\n<details>\n <summary>Why random chance for success with 5 candidates is 38%?</summary>\n \n It is a binomial distribution probability calculation. 
\n \n Assuming N=5 candidates, and K=2 associations, there could be three events: \n (1) The probability for a random guess is correct in 0 associations is 0.3 (elaborate below), and the Jaccard index is 0 (there is no intersection between the correct labels and the wrong guesses). Therefore the expected random score is 0. \n (2) The probability for a random guess is correct in 1 associations is 0.6, and the Jaccard index is 0.33 (intersection=1, union=3, one of the correct guesses, and one of the wrong guesses). Therefore the expected random score is 0.6*0.33 = 0.198. \n (3) The probability for a random guess is correct in 2 associations is 0.1, and the Jaccard index is 1 (intersection=2, union=2). Therefore the expected random score is 0.1*1 = 0.1. \n * Together, when K=2, the expected score is 0+0.198+0.1 = 0.298. \n \n To calculate (1), the first guess needs to be wrong. There are 3 \"wrong\" guesses and 5 candidates, so the probability for it is 3/5. The next guess should also be wrong. Now there are only 2 \"wrong\" guesses, and 4 candidates, so the probability for it is 2/4. Multiplying 3/5 * 2/4 = 0.3. \n Same goes for (2) and (3). \n \n Now we can perform the same calculation with K=3 associations. \n Assuming N=5 candidates, and K=3 associations, there could be four events: \n (4) The probability for a random guess is correct in 0 associations is 0, and the Jaccard index is 0. Therefore the expected random score is 0. \n (5) The probability for a random guess is correct in 1 associations is 0.3, and the Jaccard index is 0.2 (intersection=1, union=4). Therefore the expected random score is 0.3*0.2 = 0.06. \n (6) The probability for a random guess is correct in 2 associations is 0.6, and the Jaccard index is 0.5 (intersection=2, union=4). Therefore the expected random score is 0.6*5 = 0.3. \n (7) The probability for a random guess is correct in 3 associations is 0.1, and the Jaccard index is 1 (intersection=3, union=3). Therefore the expected random score is 0.1*1 = 0.1. \n * Together, when K=3, the expected score is 0+0.06+0.3+0.1 = 0.46. \n \nTaking the average of 0.298 and 0.46 we reach 0.379. \n\nSame process can be recalculated with 6 candidates (and K=2,3,4), 10 candidates (and K=2,3,4,5) and 123 candidates (and K=2,3,4,5,6). \n\n</details>", "## Dataset Creation\n\nInspired by the popular card game Codenames, a “spymaster” gives a textual cue related to several visual candidates, and another player has to identify them. Human players are rewarded for creating\nassociations that are challenging for a rival AI model but still solvable by other\nhuman players.", "### Annotations", "#### Annotation process\n\nWe paid Amazon Mechanical Turk Workers to play our game.", "## Considerations for Using the Data\n\nAll associations were obtained with human annotators.", "### Licensing Information\n\nCC-By 4.0 \n\n\n\n @article{bitton2022winogavil,\n title={WinoGAViL: Gamified Association Benchmark to Challenge Vision-and-Language Models},\n author={Bitton, Yonatan and Guetta, Nitzan Bitton and Yosef, Ron and Elovici, Yuval and Bansal, Mohit and Stanovsky, Gabriel and Schwartz, Roy},\n journal={arXiv preprint arXiv:2207.12576},\n year={2022}" ]
[ "TAGS\n#annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-4.0 #commonsense-reasoning #visual-reasoning #arxiv-2207.12576 #region-us \n", "# Dataset Card for WinoGAViL\n\n- Dataset Description\n - Supported Tasks and Leaderboards\n - Colab notebook code for Winogavil evaluation with CLIP\n - Languages\n- Dataset Structure\n - Data Fields\n - Data Splits\n- Dataset Creation\n- Considerations for Using the Data\n - Licensing Information\n - Citation Information", "## Dataset Description\n\nWinoGAViL is a challenging dataset for evaluating vision-and-language commonsense reasoning abilities. Given a set of images, a cue, and a number K, the task is to select the K images that best fits the association. This dataset was collected via the WinoGAViL online game to collect vision-and-language associations, (e.g., werewolves to a full moon). Inspired by the popular card game Codenames, a spymaster gives a textual cue related to several visual candidates, and another player has to identify them. Human players are rewarded for creating associations that are challenging for a rival AI model but still solvable by other human players. We evaluate several state-of-the-art vision-and-language models, finding that they are intuitive for humans (>90% Jaccard index) but challenging for state-of-the-art AI models, where the best model (ViLT) achieves a score of 52%, succeeding mostly where the cue is visually salient. Our analysis as well as the feedback we collect from players indicate that the collected associations require diverse reasoning skills, including general knowledge, common sense, abstraction, and more. \n\n- Homepage: \nURL\n- Colab\nURL\n- Repository:\nURL\n- Paper:\nURL\n- Leaderboard:\nURL\n- Point of Contact:\nwinogavil@URL; yonatanbitton1@URL", "### Supported Tasks and Leaderboards\n\nURL \nURL", "## Colab notebook code for Winogavil evaluation with CLIP\nURL", "### Languages\n\nEnglish.", "## Dataset Structure", "### Data Fields\n\ncandidates (list): [\"bison\", \"shelter\", \"beard\", \"flea\", \"cattle\", \"shave\"] - list of image candidates. \ncue (string): pogonophile - the generated cue. \nassociations (string): [\"bison\", \"beard\", \"shave\"] - the images associated with the cue selected by the user. \nscore_fool_the_ai (int64): 80 - the spymaster score (100 - model score) for fooling the AI, with CLIP RN50 model. \nnum_associations (int64): 3 - The number of images selected as associative with the cue. \nnum_candidates (int64): 6 - the number of total candidates. \nsolvers_jaccard_mean (float64): 1.0 - three solvers scores average on the generated association instance. \nsolvers_jaccard_std (float64): 1.0 - three solvers scores standard deviation on the generated association instance\nID (int64): 367 - association ID.", "### Data Splits\nThere is a single TEST split. In the accompanied paper and code we sample it to create different training sets, but the intended use is to use winogavil as a test set.\nThere are different number of candidates, which creates different difficulty levels: \n -- With 5 candidates, random model expected score is 38%. \n -- With 6 candidates, random model expected score is 34%. \n -- With 10 candidates, random model expected score is 24%. \n -- With 12 candidates, random model expected score is 19%. 
\n\n<details>\n <summary>Why random chance for success with 5 candidates is 38%?</summary>\n \n It is a binomial distribution probability calculation. \n \n Assuming N=5 candidates, and K=2 associations, there could be three events: \n (1) The probability for a random guess is correct in 0 associations is 0.3 (elaborate below), and the Jaccard index is 0 (there is no intersection between the correct labels and the wrong guesses). Therefore the expected random score is 0. \n (2) The probability for a random guess is correct in 1 associations is 0.6, and the Jaccard index is 0.33 (intersection=1, union=3, one of the correct guesses, and one of the wrong guesses). Therefore the expected random score is 0.6*0.33 = 0.198. \n (3) The probability for a random guess is correct in 2 associations is 0.1, and the Jaccard index is 1 (intersection=2, union=2). Therefore the expected random score is 0.1*1 = 0.1. \n * Together, when K=2, the expected score is 0+0.198+0.1 = 0.298. \n \n To calculate (1), the first guess needs to be wrong. There are 3 \"wrong\" guesses and 5 candidates, so the probability for it is 3/5. The next guess should also be wrong. Now there are only 2 \"wrong\" guesses, and 4 candidates, so the probability for it is 2/4. Multiplying 3/5 * 2/4 = 0.3. \n Same goes for (2) and (3). \n \n Now we can perform the same calculation with K=3 associations. \n Assuming N=5 candidates, and K=3 associations, there could be four events: \n (4) The probability for a random guess is correct in 0 associations is 0, and the Jaccard index is 0. Therefore the expected random score is 0. \n (5) The probability for a random guess is correct in 1 associations is 0.3, and the Jaccard index is 0.2 (intersection=1, union=4). Therefore the expected random score is 0.3*0.2 = 0.06. \n (6) The probability for a random guess is correct in 2 associations is 0.6, and the Jaccard index is 0.5 (intersection=2, union=4). Therefore the expected random score is 0.6*5 = 0.3. \n (7) The probability for a random guess is correct in 3 associations is 0.1, and the Jaccard index is 1 (intersection=3, union=3). Therefore the expected random score is 0.1*1 = 0.1. \n * Together, when K=3, the expected score is 0+0.06+0.3+0.1 = 0.46. \n \nTaking the average of 0.298 and 0.46 we reach 0.379. \n\nSame process can be recalculated with 6 candidates (and K=2,3,4), 10 candidates (and K=2,3,4,5) and 123 candidates (and K=2,3,4,5,6). \n\n</details>", "## Dataset Creation\n\nInspired by the popular card game Codenames, a “spymaster” gives a textual cue related to several visual candidates, and another player has to identify them. Human players are rewarded for creating\nassociations that are challenging for a rival AI model but still solvable by other\nhuman players.", "### Annotations", "#### Annotation process\n\nWe paid Amazon Mechanical Turk Workers to play our game.", "## Considerations for Using the Data\n\nAll associations were obtained with human annotators.", "### Licensing Information\n\nCC-By 4.0 \n\n\n\n @article{bitton2022winogavil,\n title={WinoGAViL: Gamified Association Benchmark to Challenge Vision-and-Language Models},\n author={Bitton, Yonatan and Guetta, Nitzan Bitton and Yosef, Ron and Elovici, Yuval and Bansal, Mohit and Stanovsky, Gabriel and Schwartz, Roy},\n journal={arXiv preprint arXiv:2207.12576},\n year={2022}" ]
c53614789f63256d057d584d40c10e2fc29212b1
This dataset is designed for testing multimodal text/image models. It's derived from the cm4-10k dataset. The current splits are: `['100.unique', '100.repeat', '300.unique', '300.repeat', '1k.unique', '1k.repeat', '10k.unique', '10k.repeat']`. The `unique` splits ensure uniqueness across text entries. The `repeat` splits repeat the same 10 unique records; these are useful for debugging memory leaks, since the records are always the same and thus remove record variation from the equation. The default split is `100.unique`. The full process of creating this dataset is documented inside [cm4-synthetic-testing.py](./cm4-synthetic-testing.py).
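A rough usage sketch, assuming the loading script in this repo exposes the split names listed above through the standard `datasets` API:

```python
# Minimal sketch: load one of the synthetic splits by name.
from datasets import load_dataset

ds = load_dataset("HuggingFaceM4/cm4-synthetic-testing", split="100.unique")
print(len(ds))       # should correspond to the split size (100 here)
print(ds[0].keys())  # inspect the available text/image fields
```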
HuggingFaceM4/cm4-synthetic-testing
[ "license:bigscience-openrail-m", "region:us" ]
2022-09-24T01:37:35+00:00
{"license": "bigscience-openrail-m"}
2022-11-22T16:24:24+00:00
[]
[]
TAGS #license-bigscience-openrail-m #region-us
This dataset is designed to be used in testing multimodal text/image models. It's derived from cm4-10k dataset. The current splits are: '['URL', 'URL', 'URL', 'URL', 'URL', 'URL', 'URL', 'URL']'. The 'unique' ones ensure uniqueness across text entries. The 'repeat' ones are repeating the same 10 unique records: - these are useful for memory leaks debugging as the records are always the same and thus remove the record variation from the equation. The default split is 'URL'. The full process of this dataset creation is documented inside URL.
[]
[ "TAGS\n#license-bigscience-openrail-m #region-us \n" ]
3ea47d49efd28082366bf993f3d2cac18e3c153d
# **Ariel Data Challenge NeurIPS 2022**

Dataset is part of the [**Ariel Machine Learning Data Challenge**](https://www.ariel-datachallenge.space/). The Ariel Space mission is a European Space Agency mission to be launched in 2029. Ariel will observe the atmospheres of 1000 extrasolar planets - planets around other stars - to determine how they are made, how they evolve and how to put our own Solar System in the galactic context.

### **Understanding worlds in our Milky Way**

Today we know of roughly 5000 exoplanets in our Milky Way galaxy. Given that the first planet was only conclusively discovered in the mid-1990s, this is an impressive achievement. Yet simple number counting does not tell us much about the nature of these worlds. One of the best ways to understand their formation and evolution histories is to understand the composition of their atmospheres. What's the chemistry, the temperatures, the cloud coverage, etc.? Can we see signs of possible bio-markers in the smaller Earth and super-Earth planets? Since we can't get in-situ measurements (even the closest exoplanet is light-years away), we rely on remote sensing and interpreting the stellar light that shines through the atmospheres of these planets. Model fitting these atmospheric exoplanet spectra is tricky and requires significant computational time. This is where you can help!

### **Speed up model fitting!**

Today, our atmospheric models are fit to the data using MCMC-type approaches. This is sufficient if your atmospheric forward models are fast to run, but convergence becomes problematic if this is not the case. This challenge looks at inverse modelling using machine learning. For more information on why we need your help, we provide more background in the about page and the documentation.

### **Many thanks to...**

[NeurIPS 2022](https://nips.cc/) for hosting the data challenge, and to the [UK Space Agency](https://www.gov.uk/government/organisations/uk-space-agency) and the [European Research Council](https://erc.europa.eu/) for supporting this effort. Also many thanks to the data challenge team and partnering institutes, and of course thanks to the [Ariel](https://arielmission.space/) team for technical support and for building the space mission in the first place!

For more information, contact us at: exoai.ucl [at] gmail.com
n1ghtf4l1/Ariel-Data-Challenge-NeurIPS-2022
[ "license:mit", "region:us" ]
2022-09-24T04:33:24+00:00
{"license": "mit"}
2022-09-24T04:55:23+00:00
[]
[]
TAGS #license-mit #region-us
# Ariel Data Challenge NeurIPS 2022

Dataset is part of the Ariel Machine Learning Data Challenge. The Ariel Space mission is a European Space Agency mission to be launched in 2029. Ariel will observe the atmospheres of 1000 extrasolar planets - planets around other stars - to determine how they are made, how they evolve and how to put our own Solar System in the galactic context.

### Understanding worlds in our Milky Way

Today we know of roughly 5000 exoplanets in our Milky Way galaxy. Given that the first planet was only conclusively discovered in the mid-1990s, this is an impressive achievement. Yet simple number counting does not tell us much about the nature of these worlds. One of the best ways to understand their formation and evolution histories is to understand the composition of their atmospheres. What's the chemistry, the temperatures, the cloud coverage, etc.? Can we see signs of possible bio-markers in the smaller Earth and super-Earth planets? Since we can't get in-situ measurements (even the closest exoplanet is light-years away), we rely on remote sensing and interpreting the stellar light that shines through the atmospheres of these planets. Model fitting these atmospheric exoplanet spectra is tricky and requires significant computational time. This is where you can help!

### Speed up model fitting!

Today, our atmospheric models are fit to the data using MCMC-type approaches. This is sufficient if your atmospheric forward models are fast to run, but convergence becomes problematic if this is not the case. This challenge looks at inverse modelling using machine learning. For more information on why we need your help, we provide more background in the about page and the documentation.

### Many thanks to...

NeurIPS 2022 for hosting the data challenge, and to the UK Space Agency and the European Research Council for supporting this effort. Also many thanks to the data challenge team and partnering institutes, and of course thanks to the Ariel team for technical support and for building the space mission in the first place!

For more information, contact us at: URL [at] URL
[ "# Ariel Data Challenge NeurIPS 2022\n\nDataset is part of the Ariel Machine Learning Data Challenge. The Ariel Space mission is a European Space Agency mission to be launched in 2029. Ariel will observe the atmospheres of 1000 extrasolar planets - planets around other stars - to determine how they are made, how they evolve and how to put our own Solar System in the gallactic context.", "### Understanding worlds in our Milky Way\n\nToday we know of roughly 5000 exoplanets in our Milky Way galaxy. Given that the first planet was only conclusively discovered in the mid-1990's, this is an impressive achievement. Yet, simple number counting does not tell us much about the nature of these worlds. One of the best ways to understand their formation and evolution histories is to understand the composition of their atmospheres. What's the chemistry, temperatures, cloud coverage, etc? Can we see signs of possible bio-markers in the smaller Earth and super-Earth planets? Since we can't get in-situ measurements (even the closest exoplanet is lightyears away), we rely on remote sensing and interpreting the stellar light that shines through the atmosphere of these planets. Model fitting these atmospheric exoplanet spectra is tricky and requires significant computational time. This is where you can help!", "### Speed up model fitting!\n\nToday, our atmospheric models are fit to the data using MCMC type approaches. This is sufficient if your atmospheric forward models are fast to run but convergence becomes problematic if this is not the case. This challenge looks at inverse modelling using machine learning. For more information on why we need your help, we provide more background in the about page and the documentation.", "### Many thanks to...\n\nNeurIPS 2022 for hosting the data challenge and to the UK Space Agency and the European Research Council for support this effort. Also many thanks to the data challenge team and partnering institutes, and of course thanks to the Ariel team for technical support and building the space mission in the first place!\n\nFor more information, contact us at: URL [at] URL" ]
[ "TAGS\n#license-mit #region-us \n", "# Ariel Data Challenge NeurIPS 2022\n\nDataset is part of the Ariel Machine Learning Data Challenge. The Ariel Space mission is a European Space Agency mission to be launched in 2029. Ariel will observe the atmospheres of 1000 extrasolar planets - planets around other stars - to determine how they are made, how they evolve and how to put our own Solar System in the gallactic context.", "### Understanding worlds in our Milky Way\n\nToday we know of roughly 5000 exoplanets in our Milky Way galaxy. Given that the first planet was only conclusively discovered in the mid-1990's, this is an impressive achievement. Yet, simple number counting does not tell us much about the nature of these worlds. One of the best ways to understand their formation and evolution histories is to understand the composition of their atmospheres. What's the chemistry, temperatures, cloud coverage, etc? Can we see signs of possible bio-markers in the smaller Earth and super-Earth planets? Since we can't get in-situ measurements (even the closest exoplanet is lightyears away), we rely on remote sensing and interpreting the stellar light that shines through the atmosphere of these planets. Model fitting these atmospheric exoplanet spectra is tricky and requires significant computational time. This is where you can help!", "### Speed up model fitting!\n\nToday, our atmospheric models are fit to the data using MCMC type approaches. This is sufficient if your atmospheric forward models are fast to run but convergence becomes problematic if this is not the case. This challenge looks at inverse modelling using machine learning. For more information on why we need your help, we provide more background in the about page and the documentation.", "### Many thanks to...\n\nNeurIPS 2022 for hosting the data challenge and to the UK Space Agency and the European Research Council for support this effort. Also many thanks to the data challenge team and partnering institutes, and of course thanks to the Ariel team for technical support and building the space mission in the first place!\n\nFor more information, contact us at: URL [at] URL" ]
618847c234ccbaafd4238ac3113da2c20b0ef758
This is a collection of embeddings that I decided to make public. Additionally, it will be where I host any future embeddings I decide to train.
BumblingOrange/Hanks_Embeddings
[ "license:bigscience-bloom-rail-1.0", "region:us" ]
2022-09-24T05:01:41+00:00
{"license": "bigscience-bloom-rail-1.0"}
2022-09-24T19:32:38+00:00
[]
[]
TAGS #license-bigscience-bloom-rail-1.0 #region-us
This is a collection of embeddings that I decided to make public. Additionally, it will be where I host any future embeddings I decide to train.
[]
[ "TAGS\n#license-bigscience-bloom-rail-1.0 #region-us \n" ]
75b8d3472af2587f51d9f635e078372d308b344a
# Dataset Card for pokemon-icons ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary Pokemon Icons. Most of them are collected and cropped from screenshots captured in Pokémon Sword and Shield. ### Supported Tasks and Leaderboards Image classification
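A minimal loading sketch for the classification task, assuming an image-folder-style layout with `image` and `label` columns (not verified against this repo):

```python
from datasets import load_dataset

ds = load_dataset("zishuod/pokemon-icons", split="train")
sample = ds[0]
print(sample["image"].size, sample["label"])  # icon crop and its class index
```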
zishuod/pokemon-icons
[ "task_categories:image-classification", "license:mit", "pokemon", "region:us" ]
2022-09-24T14:12:08+00:00
{"annotations_creators": [], "language_creators": [], "language": [], "license": ["mit"], "multilinguality": [], "size_categories": [], "source_datasets": [], "task_categories": ["image-classification"], "task_ids": [], "pretty_name": "pokemon-icons", "tags": ["pokemon"]}
2022-09-24T14:35:39+00:00
[]
[]
TAGS #task_categories-image-classification #license-mit #pokemon #region-us
# Dataset Card for pokemon-icons ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: - Repository: - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary Pokemon Icons. Most of them are collected and cropped from screenshots captured in Pokémon Sword and Shield. ### Supported Tasks and Leaderboards Image classification
[ "# Dataset Card for pokemon-icons", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nPokemon Icons. Most of them are collected and cropped from screenshots captured in Pokémon Sword and Shield.", "### Supported Tasks and Leaderboards\n\nImage classification" ]
[ "TAGS\n#task_categories-image-classification #license-mit #pokemon #region-us \n", "# Dataset Card for pokemon-icons", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nPokemon Icons. Most of them are collected and cropped from screenshots captured in Pokémon Sword and Shield.", "### Supported Tasks and Leaderboards\n\nImage classification" ]
8f854e3e4f7007134410f2040827bba7bf4c3dd8
Bundesliga Videos dataset from Kaggle competition: https://www.kaggle.com/competitions/dfl-bundesliga-data-shootout
dbal0503/Bundesliga
[ "region:us" ]
2022-09-24T17:04:15+00:00
{}
2022-09-26T16:48:50+00:00
[]
[]
TAGS #region-us
Bundesliga Videos dataset from Kaggle competition: URL
[]
[ "TAGS\n#region-us \n" ]
e3e2a63ffff66b9a9735524551e3818e96af03ee
https://github.com/karolpiczak/ESC-50 The dataset is available under the terms of the Creative Commons Attribution Non-Commercial license. K. J. Piczak. ESC: Dataset for Environmental Sound Classification. Proceedings of the 23rd Annual ACM Conference on Multimedia, Brisbane, Australia, 2015. [DOI: http://dx.doi.org/10.1145/2733373.2806390]
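A minimal loading sketch, assuming the standard `datasets` API; the column names mentioned in the comment follow the upstream ESC-50 metadata and are not verified against this repo:

```python
from datasets import load_dataset

ds = load_dataset("ashraq/esc50", split="train")
print(ds[0])  # inspect the columns (e.g. filename, category, audio metadata)
```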
ashraq/esc50
[ "region:us" ]
2022-09-24T18:51:49+00:00
{}
2023-01-07T08:35:28+00:00
[]
[]
TAGS #region-us
URL The dataset is available under the terms of the Creative Commons Attribution Non-Commercial license. K. J. Piczak. ESC: Dataset for Environmental Sound Classification. Proceedings of the 23rd Annual ACM Conference on Multimedia, Brisbane, Australia, 2015. [DOI: URL
[]
[ "TAGS\n#region-us \n" ]
afd9400721e19e44f4d28598cb73902558f02bbb
We partition the earnings22 dataset at https://huggingface.co/datasets/anton-l/earnings22_baseline_5_gram by source_id: Validation: 4420696 4448760 4461799 4469836 4473238 4482110 Test: 4432298 4450488 4470290 4479741 4483338 4485244 Train: remainder Official script for processing these splits will be released shortly.
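Until the official script lands, a rough sketch of the intended partition might look like the following; the base dataset ID comes from the link above, the `source_id` column name is taken from this card, and the `train` split name is an assumption:

```python
# Sketch only: partition earnings22 by source_id as described above.
from datasets import load_dataset

VALIDATION_IDS = {"4420696", "4448760", "4461799", "4469836", "4473238", "4482110"}
TEST_IDS = {"4432298", "4450488", "4470290", "4479741", "4483338", "4485244"}

ds = load_dataset("anton-l/earnings22_baseline_5_gram", split="train")

validation = ds.filter(lambda ex: str(ex["source_id"]) in VALIDATION_IDS)
test = ds.filter(lambda ex: str(ex["source_id"]) in TEST_IDS)
train = ds.filter(lambda ex: str(ex["source_id"]) not in (VALIDATION_IDS | TEST_IDS))
```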
sanchit-gandhi/earnings22_split_resampled
[ "region:us" ]
2022-09-24T19:26:46+00:00
{}
2022-09-30T14:24:09+00:00
[]
[]
TAGS #region-us
We partition the earnings22 dataset at URL by source_id: Validation: 4420696 4448760 4461799 4469836 4473238 4482110 Test: 4432298 4450488 4470290 4479741 4483338 4485244 Train: remainder Official script for processing these splits will be released shortly.
[]
[ "TAGS\n#region-us \n" ]
505bb434cc751d0b5158ae82f368a7c63e7a94c6
# Dataset Card for Nouns auto-captioned _Dataset used to train Nouns text to image model_ Automatically generated captions for Nouns from their attributes, colors and items. Help on the captioning script appreciated! For each row the dataset contains `image` and `text` keys. `image` is a varying size PIL jpeg, and `text` is the accompanying text caption. Only a train split is provided. ## Citation If you use this dataset, please cite it as: ``` @misc{piedrafita2022nouns, author = {Piedrafita, Miguel}, title = {Nouns auto-captioned}, year={2022}, howpublished= {\url{https://huggingface.co/datasets/m1guelpf/nouns/}} } ```
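A minimal loading sketch, assuming the standard `datasets` API and the repo ID this card is published under:

```python
from datasets import load_dataset

ds = load_dataset("m1guelpf/nouns", split="train")
example = ds[0]
example["image"].save("noun-0.jpg")  # varying-size PIL jpeg, per the card
print(example["text"])               # auto-generated caption
```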
m1guelpf/nouns
[ "task_categories:text-to-image", "annotations_creators:machine-generated", "language_creators:other", "multilinguality:monolingual", "size_categories:10K<n<100K", "language:en", "license:cc0-1.0", "region:us" ]
2022-09-25T02:30:09+00:00
{"annotations_creators": ["machine-generated"], "language_creators": ["other"], "language": ["en"], "license": "cc0-1.0", "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "task_categories": ["text-to-image"], "task_ids": [], "pretty_name": "Nouns auto-captioned", "tags": []}
2022-09-25T05:18:40+00:00
[]
[ "en" ]
TAGS #task_categories-text-to-image #annotations_creators-machine-generated #language_creators-other #multilinguality-monolingual #size_categories-10K<n<100K #language-English #license-cc0-1.0 #region-us
# Dataset Card for Nouns auto-captioned _Dataset used to train Nouns text to image model_ Automatically generated captions for Nouns from their attributes, colors and items. Help on the captioning script appreciated! For each row the dataset contains 'image' and 'text' keys. 'image' is a varying size PIL jpeg, and 'text' is the accompanying text caption. Only a train split is provided. If you use this dataset, please cite it as:
[ "# Dataset Card for Nouns auto-captioned\n\n_Dataset used to train Nouns text to image model_\n\nAutomatically generated captions for Nouns from their attributes, colors and items. Help on the captioning script appreciated!\n\nFor each row the dataset contains 'image' and 'text' keys. 'image' is a varying size PIL jpeg, and 'text' is the accompanying text caption. Only a train split is provided.\n\n\nIf you use this dataset, please cite it as:" ]
[ "TAGS\n#task_categories-text-to-image #annotations_creators-machine-generated #language_creators-other #multilinguality-monolingual #size_categories-10K<n<100K #language-English #license-cc0-1.0 #region-us \n", "# Dataset Card for Nouns auto-captioned\n\n_Dataset used to train Nouns text to image model_\n\nAutomatically generated captions for Nouns from their attributes, colors and items. Help on the captioning script appreciated!\n\nFor each row the dataset contains 'image' and 'text' keys. 'image' is a varying size PIL jpeg, and 'text' is the accompanying text caption. Only a train split is provided.\n\n\nIf you use this dataset, please cite it as:" ]
bbfa20fac8083c90012bca77e55acd8aa4d5c824
# Info >Try to include embedding info in the commit description (model, author, artist, images, etc) >Naming: name-object/style
waifu-research-department/embeddings
[ "license:mit", "region:us" ]
2022-09-25T05:13:59+00:00
{"license": "mit"}
2022-09-29T01:50:05+00:00
[]
[]
TAGS #license-mit #region-us
# Info >Try to include embedding info in the commit description (model, author, artist, images, etc) >Naming: name-object/style
[ "# Info\n>Try to include embedding info in the commit description (model, author, artist, images, etc)\n\n>Naming: name-object/style" ]
[ "TAGS\n#license-mit #region-us \n", "# Info\n>Try to include embedding info in the commit description (model, author, artist, images, etc)\n\n>Naming: name-object/style" ]
9c9b738f010f33843d0bc076f1024d3ca7191fb4
# Dataset Card for "Text" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Miron/NLP_1
[ "region:us" ]
2022-09-25T14:43:59+00:00
{"dataset_info": {"features": [{"name": "Science artilce's texts", "dtype": "string"}, {"name": "text_length", "dtype": "int64"}, {"name": "TEXT", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 54709956.09102402, "num_examples": 711}, {"name": "validation", "num_bytes": 6155831.908975979, "num_examples": 80}], "download_size": 26356400, "dataset_size": 60865788.0}}
2022-11-10T08:00:19+00:00
[]
[]
TAGS #region-us
# Dataset Card for "Text" More Information needed
[ "# Dataset Card for \"Text\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"Text\"\n\nMore Information needed" ]
21f3313de37d60d45fb67a276d63ace9c4a0ac7d
# Dataset Card for MedNLI

## Dataset Description

- **Homepage:** https://physionet.org/content/mednli/1.0.0/
- **Pubmed:** False
- **Public:** False
- **Tasks:** TE

State-of-the-art models using deep neural networks have become very good at learning an accurate mapping from inputs to outputs. However, they still lack generalization capabilities in conditions that differ from the ones encountered during training. This is even more challenging in specialized and knowledge-intensive domains, where training data is limited. To address this gap, we introduce MedNLI - a dataset annotated by doctors, performing a natural language inference task (NLI), grounded in the medical history of patients. As the source of premise sentences, we used the MIMIC-III database. More specifically, to minimize the risks to patient privacy, we worked with clinical notes corresponding to deceased patients. The clinicians on our team suggested the Past Medical History to be the most informative section of a clinical note, from which useful inferences can be drawn about the patient.

## Citation Information

```
@misc{https://doi.org/10.13026/c2rs98,
    title = {MedNLI — A Natural Language Inference Dataset For The Clinical Domain},
    author = {Shivade, Chaitanya},
    year = 2017,
    publisher = {physionet.org},
    doi = {10.13026/C2RS98},
    url = {https://physionet.org/content/mednli/}
}
```
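To make the entailment task concrete, a MedNLI-style record pairs a premise sentence drawn from a clinical note with a clinician-written hypothesis and a three-way label. The example below is entirely hypothetical, invented for illustration; real records require PhysioNet credentialed access:

```python
# Hypothetical MedNLI-style record (illustrative only; not real patient data).
example = {
    "premise": "Past medical history is notable for type 2 diabetes and hypertension.",
    "hypothesis": "The patient has a history of elevated blood pressure.",
    "label": "entailment",  # one of: entailment / neutral / contradiction
}
print(example["label"])
```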
bigbio/mednli
[ "multilinguality:monolingual", "language:en", "license:other", "region:us" ]
2022-09-26T02:08:16+00:00
{"language": ["en"], "license": "other", "multilinguality": "monolingual", "paperswithcode_id": "mednli", "pretty_name": "MedNLI", "bigbio_language": ["English"], "bigbio_license_short_name": "PHYSIONET_LICENSE_1p5", "homepage": "https://physionet.org/content/mednli/1.0.0/", "bigbio_pubmed": false, "bigbio_public": false, "bigbio_tasks": ["TEXTUAL_ENTAILMENT"]}
2022-12-22T15:24:43+00:00
[]
[ "en" ]
TAGS #multilinguality-monolingual #language-English #license-other #region-us
# Dataset Card for MedNLI ## Dataset Description - Homepage: URL - Pubmed: False - Public: False - Tasks: TE State of the art models using deep neural networks have become very good in learning an accurate mapping from inputs to outputs. However, they still lack generalization capabilities in conditions that differ from the ones encountered during training. This is even more challenging in specialized, and knowledge intensive domains, where training data is limited. To address this gap, we introduce MedNLI - a dataset annotated by doctors, performing a natural language inference task (NLI), grounded in the medical history of patients. As the source of premise sentences, we used the MIMIC-III. More specifically, to minimize the risks to patient privacy, we worked with clinical notes corresponding to the deceased patients. The clinicians in our team suggested the Past Medical History to be the most informative section of a clinical note, from which useful inferences can be drawn about the patient.
[ "# Dataset Card for MedNLI", "## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: False\n- Tasks: TE\n\n\nState of the art models using deep neural networks have become very good in learning an accurate\nmapping from inputs to outputs. However, they still lack generalization capabilities in conditions\nthat differ from the ones encountered during training. This is even more challenging in specialized,\nand knowledge intensive domains, where training data is limited. To address this gap, we introduce\nMedNLI - a dataset annotated by doctors, performing a natural language inference task (NLI),\ngrounded in the medical history of patients. As the source of premise sentences, we used the\nMIMIC-III. More specifically, to minimize the risks to patient privacy, we worked with clinical\nnotes corresponding to the deceased patients. The clinicians in our team suggested the Past Medical\nHistory to be the most informative section of a clinical note, from which useful inferences can be\ndrawn about the patient." ]
[ "TAGS\n#multilinguality-monolingual #language-English #license-other #region-us \n", "# Dataset Card for MedNLI", "## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: False\n- Tasks: TE\n\n\nState of the art models using deep neural networks have become very good in learning an accurate\nmapping from inputs to outputs. However, they still lack generalization capabilities in conditions\nthat differ from the ones encountered during training. This is even more challenging in specialized,\nand knowledge intensive domains, where training data is limited. To address this gap, we introduce\nMedNLI - a dataset annotated by doctors, performing a natural language inference task (NLI),\ngrounded in the medical history of patients. As the source of premise sentences, we used the\nMIMIC-III. More specifically, to minimize the risks to patient privacy, we worked with clinical\nnotes corresponding to the deceased patients. The clinicians in our team suggested the Past Medical\nHistory to be the most informative section of a clinical note, from which useful inferences can be\ndrawn about the patient." ]