Dataset Viewer (auto-converted to Parquet)
| Column | Type | Values / lengths |
| --- | --- | --- |
| url | string | lengths 58–61 |
| repository_url | string | 1 distinct value |
| labels_url | string | lengths 72–75 |
| comments_url | string | lengths 67–70 |
| events_url | string | lengths 65–68 |
| html_url | string | lengths 46–51 |
| id | int64 | 599M–1.21B |
| node_id | string | lengths 18–32 |
| number | int64 | 1–4.21k |
| title | string | lengths 1–276 |
| user | dict | |
| labels | sequence | |
| state | string | 2 distinct values |
| locked | bool | 1 distinct value |
| assignee | null | |
| assignees | sequence | |
| milestone | null | |
| comments | sequence | |
| created_at | int64 | 1,587B–1,651B |
| updated_at | int64 | 1,587B–1,651B |
| closed_at | int64 | 1,587B–1,651B |
| author_association | string | 3 distinct values |
| active_lock_reason | null | |
| draft | bool | 2 distinct values |
| pull_request | dict | |
| body | string | lengths 0–228k |
| reactions | dict | |
| timeline_url | string | lengths 67–70 |
| performed_via_github_app | null | |
| is_pull_request | bool | 2 distinct values |
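The rows that follow are individual records from this split, one field per line in the column order above. As a quick orientation, here is a minimal sketch of loading such an auto-converted split with the `datasets` library; the repository id is a placeholder, not the actual repo name.

```python
from datasets import load_dataset

# Placeholder repo id -- substitute the dataset's actual repository on the Hub.
ds = load_dataset("user/github-issues", split="train")

print(ds.features)                      # should match the schema table above
print(ds[0]["number"], ds[0]["title"])  # issue number and title of the first record
```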
https://api.github.com/repos/huggingface/datasets/issues/4206
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4206/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4206/comments
https://api.github.com/repos/huggingface/datasets/issues/4206/events
https://github.com/huggingface/datasets/pull/4206
1,212,715,581
PR_kwDODunzps42pJQW
4,206
Add Nerval Metric
{ "login": "mdadda", "id": 49372461, "node_id": "MDQ6VXNlcjQ5MzcyNDYx", "avatar_url": "https://avatars.githubusercontent.com/u/49372461?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mdadda", "html_url": "https://github.com/mdadda", "followers_url": "https://api.github.com/users/mdadda/followers", "following_url": "https://api.github.com/users/mdadda/following{/other_user}", "gists_url": "https://api.github.com/users/mdadda/gists{/gist_id}", "starred_url": "https://api.github.com/users/mdadda/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mdadda/subscriptions", "organizations_url": "https://api.github.com/users/mdadda/orgs", "repos_url": "https://api.github.com/users/mdadda/repos", "events_url": "https://api.github.com/users/mdadda/events{/privacy}", "received_events_url": "https://api.github.com/users/mdadda/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,650,656,700,000
1,650,708,329,000
null
NONE
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4206", "html_url": "https://github.com/huggingface/datasets/pull/4206", "diff_url": "https://github.com/huggingface/datasets/pull/4206.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4206.patch", "merged_at": null }
This PR adds readme.md and ner_val.py to metrics. Nerval is a Python package that helps evaluate NER models. It creates a classification report and a confusion matrix at the entity level.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4206/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4206/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/4205
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4205/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4205/comments
https://api.github.com/repos/huggingface/datasets/issues/4205/events
https://github.com/huggingface/datasets/pull/4205
1,212,466,138
PR_kwDODunzps42oVFE
4,205
Fix `convert_file_size_to_int` for kilobits and megabits
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4205). All of your documentation changes will be reflected on that endpoint." ]
1,650,639,381,000
1,650,640,052,000
null
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4205", "html_url": "https://github.com/huggingface/datasets/pull/4205", "diff_url": "https://github.com/huggingface/datasets/pull/4205.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4205.patch", "merged_at": null }
Minor change to fully align this function with the recent change in Transformers (https://github.com/huggingface/transformers/pull/16891)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4205/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4205/timeline
null
true
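PR #4205 above aligns `convert_file_size_to_int` with the corresponding Transformers change so that lowercase-`b` suffixes are treated as kilobits and megabits. The sketch below only illustrates that idea; it is not the code merged in the PR, and the exact suffixes and rounding used by the library may differ.

```python
def convert_file_size_to_int(size):
    """Illustrative sketch: turn sizes like '200MB' or '2Kb' into a number of bytes.

    Not the implementation merged in #4205; lowercase 'b' is read as bits here.
    """
    if isinstance(size, int):
        return size
    byte_units = {"GiB": 2**30, "MiB": 2**20, "KiB": 2**10, "GB": 10**9, "MB": 10**6, "KB": 10**3}
    for suffix, factor in byte_units.items():
        if size.endswith(suffix):
            return int(float(size[: -len(suffix)]) * factor)
    bit_units = {"Mb": 10**6 // 8, "Kb": 10**3 // 8}  # megabits / kilobits -> bytes
    for suffix, factor in bit_units.items():
        if size.endswith(suffix):
            return int(float(size[: -len(suffix)]) * factor)
    raise ValueError(f"Unsupported size format: {size!r}")
```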
https://api.github.com/repos/huggingface/datasets/issues/4204
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4204/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4204/comments
https://api.github.com/repos/huggingface/datasets/issues/4204/events
https://github.com/huggingface/datasets/pull/4204
1,212,431,764
PR_kwDODunzps42oN0j
4,204
Add Recall Metric Card
{ "login": "emibaylor", "id": 27527747, "node_id": "MDQ6VXNlcjI3NTI3NzQ3", "avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4", "gravatar_id": "", "url": "https://api.github.com/users/emibaylor", "html_url": "https://github.com/emibaylor", "followers_url": "https://api.github.com/users/emibaylor/followers", "following_url": "https://api.github.com/users/emibaylor/following{/other_user}", "gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}", "starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions", "organizations_url": "https://api.github.com/users/emibaylor/orgs", "repos_url": "https://api.github.com/users/emibaylor/repos", "events_url": "https://api.github.com/users/emibaylor/events{/privacy}", "received_events_url": "https://api.github.com/users/emibaylor/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4204). All of your documentation changes will be reflected on that endpoint." ]
1,650,637,466,000
1,650,638,399,000
null
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4204", "html_url": "https://github.com/huggingface/datasets/pull/4204", "diff_url": "https://github.com/huggingface/datasets/pull/4204.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4204.patch", "merged_at": null }
What this PR mainly does: - add metric card for recall metric - update docs in recall python file Note: I've also included a .json file with all of the metric card information. I've started compiling the relevant information in this type of .json files, and then using a script I wrote to generate the formatted metric card, as well as the docs to go in the .py file. I figured I'd upload the .json because it could be useful, especially if I also make a PR with the script I'm using (let me know if that's something you think would be beneficial!)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4204/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4204/timeline
null
true
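PR #4204 above adds a metric card for the recall metric (and the next record, #4203, does the same for precision). For context, this is roughly how those metrics were loaded with the `load_metric` API that `datasets` exposed at the time (since superseded by the separate `evaluate` library):

```python
from datasets import load_metric

recall = load_metric("recall")
precision = load_metric("precision")

preds, refs = [0, 1, 1, 0], [0, 1, 0, 0]
print(recall.compute(predictions=preds, references=refs))     # {'recall': 1.0}
print(precision.compute(predictions=preds, references=refs))  # {'precision': 0.5}
```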
https://api.github.com/repos/huggingface/datasets/issues/4203
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4203/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4203/comments
https://api.github.com/repos/huggingface/datasets/issues/4203/events
https://github.com/huggingface/datasets/pull/4203
1,212,431,067
PR_kwDODunzps42oNrS
4,203
Add Precision Metric Card
{ "login": "emibaylor", "id": 27527747, "node_id": "MDQ6VXNlcjI3NTI3NzQ3", "avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4", "gravatar_id": "", "url": "https://api.github.com/users/emibaylor", "html_url": "https://github.com/emibaylor", "followers_url": "https://api.github.com/users/emibaylor/followers", "following_url": "https://api.github.com/users/emibaylor/following{/other_user}", "gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}", "starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions", "organizations_url": "https://api.github.com/users/emibaylor/orgs", "repos_url": "https://api.github.com/users/emibaylor/repos", "events_url": "https://api.github.com/users/emibaylor/events{/privacy}", "received_events_url": "https://api.github.com/users/emibaylor/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4203). All of your documentation changes will be reflected on that endpoint." ]
1,650,637,428,000
1,650,638,890,000
null
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4203", "html_url": "https://github.com/huggingface/datasets/pull/4203", "diff_url": "https://github.com/huggingface/datasets/pull/4203.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4203.patch", "merged_at": null }
What this PR mainly does: - add metric card for precision metric - update docs in precision python file Note: I've also included a .json file with all of the metric card information. I've started compiling the relevant information in this type of .json files, and then using a script I wrote to generate the formatted metric card, as well as the docs to go in the .py file. I figured I'd upload the .json because it could be useful, especially if I also make a PR with the script I'm using (let me know if that's something you think would be beneficial!)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4203/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4203/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/4202
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4202/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4202/comments
https://api.github.com/repos/huggingface/datasets/issues/4202/events
https://github.com/huggingface/datasets/pull/4202
1,212,326,288
PR_kwDODunzps42n278
4,202
Fix some type annotation in doc
{ "login": "thomasw21", "id": 24695242, "node_id": "MDQ6VXNlcjI0Njk1MjQy", "avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomasw21", "html_url": "https://github.com/thomasw21", "followers_url": "https://api.github.com/users/thomasw21/followers", "following_url": "https://api.github.com/users/thomasw21/following{/other_user}", "gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions", "organizations_url": "https://api.github.com/users/thomasw21/orgs", "repos_url": "https://api.github.com/users/thomasw21/repos", "events_url": "https://api.github.com/users/thomasw21/events{/privacy}", "received_events_url": "https://api.github.com/users/thomasw21/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,650,632,011,000
1,650,639,780,000
1,650,639,403,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4202", "html_url": "https://github.com/huggingface/datasets/pull/4202", "diff_url": "https://github.com/huggingface/datasets/pull/4202.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4202.patch", "merged_at": 1650639403000 }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4202/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4202/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/4201
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4201/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4201/comments
https://api.github.com/repos/huggingface/datasets/issues/4201/events
https://github.com/huggingface/datasets/pull/4201
1,212,086,420
PR_kwDODunzps42nIRm
4,201
Update GH template for dataset viewer issues
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4201). All of your documentation changes will be reflected on that endpoint.", "You can see rendering at: https://github.com/huggingface/datasets/blob/6b48fedbdafe12a42c7b6edcecc32820af1a4822/.github/ISSUE_TEMPLATE/dataset-viewer.yml" ]
1,650,620,084,000
1,650,624,864,000
null
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4201", "html_url": "https://github.com/huggingface/datasets/pull/4201", "diff_url": "https://github.com/huggingface/datasets/pull/4201.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4201.patch", "merged_at": null }
Update template to use new issue forms instead. With this PR we can check if this new feature is useful for us. Once validated, we can update the other templates. CC: @severo
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4201/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4201/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/4200
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4200/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4200/comments
https://api.github.com/repos/huggingface/datasets/issues/4200/events
https://github.com/huggingface/datasets/pull/4200
1,211,980,110
PR_kwDODunzps42mz0w
4,200
Add to docs how to load from local script
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,650,614,905,000
1,650,693,228,000
1,650,692,845,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4200", "html_url": "https://github.com/huggingface/datasets/pull/4200", "diff_url": "https://github.com/huggingface/datasets/pull/4200.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4200.patch", "merged_at": 1650692844000 }
This option was missing from the docs guide (it was only explained in the docstring of `load_dataset`). Although this is an infrequent use case, there might be some users interested in it. Related to #4192 CC: @stevhliu
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4200/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4200/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/4199
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4199/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4199/comments
https://api.github.com/repos/huggingface/datasets/issues/4199/events
https://github.com/huggingface/datasets/issues/4199
1,211,953,308
I_kwDODunzps5IPPCc
4,199
Cache miss during reload for datasets using image fetch utilities through map
{ "login": "apsdehal", "id": 3616806, "node_id": "MDQ6VXNlcjM2MTY4MDY=", "avatar_url": "https://avatars.githubusercontent.com/u/3616806?v=4", "gravatar_id": "", "url": "https://api.github.com/users/apsdehal", "html_url": "https://github.com/apsdehal", "followers_url": "https://api.github.com/users/apsdehal/followers", "following_url": "https://api.github.com/users/apsdehal/following{/other_user}", "gists_url": "https://api.github.com/users/apsdehal/gists{/gist_id}", "starred_url": "https://api.github.com/users/apsdehal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/apsdehal/subscriptions", "organizations_url": "https://api.github.com/users/apsdehal/orgs", "repos_url": "https://api.github.com/users/apsdehal/repos", "events_url": "https://api.github.com/users/apsdehal/events{/privacy}", "received_events_url": "https://api.github.com/users/apsdehal/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[ { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi ! Maybe one of the objects in the function is not deterministic across sessions ? You can read more about it and how to investigate here: https://huggingface.co/docs/datasets/about_cache", "Hi @apsdehal! Can you verify that replacing\r\n```python\r\ndef fetch_single_image(image_url, timeout=None, retries=0):\r\n for _ in range(retries + 1):\r\n try:\r\n request = urllib.request.Request(\r\n image_url,\r\n data=None,\r\n headers={\"user-agent\": get_datasets_user_agent()},\r\n )\r\n with urllib.request.urlopen(request, timeout=timeout) as req:\r\n image = PIL.Image.open(io.BytesIO(req.read()))\r\n break\r\n except Exception:\r\n image = None\r\n return image\r\n```\r\nwith \r\n```python\r\nUSER_AGENT = get_datasets_user_agent()\r\n\r\ndef fetch_single_image(image_url, timeout=None, retries=0):\r\n for _ in range(retries + 1):\r\n try:\r\n request = urllib.request.Request(\r\n image_url,\r\n data=None,\r\n headers={\"user-agent\": USER_AGENT},\r\n )\r\n with urllib.request.urlopen(request, timeout=timeout) as req:\r\n image = PIL.Image.open(io.BytesIO(req.read()))\r\n break\r\n except Exception:\r\n image = None\r\n return image\r\n```\r\nfixes the issue?", "Thanks @mariosasko. That does fix the issue. In general, I think these image downloading utilities since they are being used by a lot of image dataset should be provided as a part of `datasets` library right to keep the logic consistent and READMEs smaller? If they already exists, that is also great, please point me to those. I saw that `http_get` does exist." ]
1,650,613,628,000
1,650,648,262,000
null
MEMBER
null
null
null
## Describe the bug It looks like that result of `.map` operation dataset are missing the cache when you reload the script and always run from scratch. In same interpretor session, they are able to find the cache and reload it. But, when you exit the interpretor and reload it, the downloading starts from scratch. ## Steps to reproduce the bug Using the example provided in `red_caps` dataset. ```python from concurrent.futures import ThreadPoolExecutor from functools import partial import io import urllib import PIL.Image import datasets from datasets import load_dataset from datasets.utils.file_utils import get_datasets_user_agent def fetch_single_image(image_url, timeout=None, retries=0): for _ in range(retries + 1): try: request = urllib.request.Request( image_url, data=None, headers={"user-agent": get_datasets_user_agent()}, ) with urllib.request.urlopen(request, timeout=timeout) as req: image = PIL.Image.open(io.BytesIO(req.read())) break except Exception: image = None return image def fetch_images(batch, num_threads, timeout=None, retries=0): fetch_single_image_with_args = partial(fetch_single_image, timeout=timeout, retries=retries) with ThreadPoolExecutor(max_workers=num_threads) as executor: batch["image"] = list(executor.map(lambda image_urls: [fetch_single_image_with_args(image_url) for image_url in image_urls], batch["image_url"])) return batch def process_image_urls(batch): processed_batch_image_urls = [] for image_url in batch["image_url"]: processed_example_image_urls = [] image_url_splits = re.findall(r"http\S+", image_url) for image_url_split in image_url_splits: if "imgur" in image_url_split and "," in image_url_split: for image_url_part in image_url_split.split(","): if not image_url_part: continue image_url_part = image_url_part.strip() root, ext = os.path.splitext(image_url_part) if not root.startswith("http"): root = "http://i.imgur.com/" + root root = root.split("#")[0] if not ext: ext = ".jpg" ext = re.split(r"[?%]", ext)[0] image_url_part = root + ext processed_example_image_urls.append(image_url_part) else: processed_example_image_urls.append(image_url_split) processed_batch_image_urls.append(processed_example_image_urls) batch["image_url"] = processed_batch_image_urls return batch dset = load_dataset("red_caps", "jellyfish") dset = dset.map(process_image_urls, batched=True, num_proc=4) features = dset["train"].features.copy() features["image"] = datasets.Sequence(datasets.Image()) num_threads = 5 dset = dset.map(fetch_images, batched=True, batch_size=50, features=features, fn_kwargs={"num_threads": num_threads}) ``` Run this in an interpretor or as a script twice and see that the cache is missed the second time. ## Expected results At reload there should not be any cache miss ## Actual results Every time script is run, cache is missed and dataset is built from scratch. ## Environment info - `datasets` version: 2.1.1.dev0 - Platform: Linux-4.19.0-20-cloud-amd64-x86_64-with-glibc2.10 - Python version: 3.8.13 - PyArrow version: 7.0.0 - Pandas version: 1.4.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4199/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4199/timeline
null
false
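Issue #4199 above boils down to the `map` transform not hashing to the same fingerprint across interpreter sessions; the fix in the comments hoists the `get_datasets_user_agent()` call out of the mapped function. A quick way to check such problems is to hash the function with the same helper `datasets` uses for cache fingerprints; the import path below assumes this helper is available in your installed version:

```python
from datasets.fingerprint import Hasher

# fetch_single_image is the function from the reproduction snippet above.
# If this value differs between two interpreter sessions, map() will not reuse
# the cached result of a transform built from it.
print(Hasher.hash(fetch_single_image))
```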
https://api.github.com/repos/huggingface/datasets/issues/4198
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4198/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4198/comments
https://api.github.com/repos/huggingface/datasets/issues/4198/events
https://github.com/huggingface/datasets/issues/4198
1,211,456,559
I_kwDODunzps5INVwv
4,198
There is no dataset
{ "login": "wilfoderek", "id": 1625647, "node_id": "MDQ6VXNlcjE2MjU2NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/1625647?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wilfoderek", "html_url": "https://github.com/wilfoderek", "followers_url": "https://api.github.com/users/wilfoderek/followers", "following_url": "https://api.github.com/users/wilfoderek/following{/other_user}", "gists_url": "https://api.github.com/users/wilfoderek/gists{/gist_id}", "starred_url": "https://api.github.com/users/wilfoderek/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wilfoderek/subscriptions", "organizations_url": "https://api.github.com/users/wilfoderek/orgs", "repos_url": "https://api.github.com/users/wilfoderek/repos", "events_url": "https://api.github.com/users/wilfoderek/events{/privacy}", "received_events_url": "https://api.github.com/users/wilfoderek/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
closed
false
null
[]
null
[]
1,650,568,766,000
1,650,607,945,000
1,650,607,945,000
NONE
null
null
null
## Dataset viewer issue for '*name of the dataset*' **Link:** *link to the dataset viewer page* *short description of the issue* Am I the one who added this dataset ? Yes-No
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4198/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4198/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/4197
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4197/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4197/comments
https://api.github.com/repos/huggingface/datasets/issues/4197/events
https://github.com/huggingface/datasets/pull/4197
1,211,342,558
PR_kwDODunzps42kyXD
4,197
Add remove_columns=True
{ "login": "thomasw21", "id": 24695242, "node_id": "MDQ6VXNlcjI0Njk1MjQy", "avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomasw21", "html_url": "https://github.com/thomasw21", "followers_url": "https://api.github.com/users/thomasw21/followers", "following_url": "https://api.github.com/users/thomasw21/following{/other_user}", "gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions", "organizations_url": "https://api.github.com/users/thomasw21/orgs", "repos_url": "https://api.github.com/users/thomasw21/repos", "events_url": "https://api.github.com/users/thomasw21/events{/privacy}", "received_events_url": "https://api.github.com/users/thomasw21/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Any reason why we can't just do `[inputs.copy()]` in this line for in-place operations to not have effects anymore:\r\nhttps://github.com/huggingface/datasets/blob/bf432011ff9155a5bc16c03956bc63e514baf80d/src/datasets/arrow_dataset.py#L2232.\r\n\r\n(in the `batched` case, we can also copy the inputs' values (list objects) to ignore in-place modifications to the inputs' columns)\r\n\r\nI think `remove_columns=True` has no meaning, so I'm not a fan of this change.", "@mariosasko copy does have a cost associated with it ... and plus you'll have to consider `deepcopy` Imagine columnds that are list of list of list of list .... Though I have to agree that `remove_columns=True` doesn't make sense (but, IMO, neither does it in its current use-case as it should refer to `input_columns`) ", "Okay closing this PR for the following reasons:\r\n - `remove_columns=True` was expected to keep the `.update`-like operator for `.map`. I initially thought it would be a good way to ignore function side effects and only keep output of that function (cf. PR description).\r\n - expected `remove_columns=True` is a bad API according to @mariosasko and introduces unecessary changes for little gain (strictly equivalent to `remove_columns=dset.column_names`)" ]
1,650,562,093,000
1,650,639,101,000
1,650,638,730,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4197", "html_url": "https://github.com/huggingface/datasets/pull/4197", "diff_url": "https://github.com/huggingface/datasets/pull/4197.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4197.patch", "merged_at": null }
This should fix all the issues we have with in-place operations in mapping functions. This is crucial in cases where we do some weird things like: ``` def apply(batch): batch_size = len(batch["id"]) batch["text"] = ["potato" for _ in range(batch_size)] return {} # Columns are: {"id": int} dset.map(apply, batched=True, remove_columns="text") # crashes because `text` is not in the original columns dset.map(apply, batched=True) # mapped dataset has `text` column ``` In this PR we suggest having `remove_columns=True` so that we ignore the input completely and just use the output to generate the mapped dataset. This means that in-place operations won't have any effect anymore.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4197/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4197/timeline
null
true
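As the closing discussion of #4197 above points out, the proposed `remove_columns=True` is strictly equivalent to passing all current column names, which already works; `dset` and `apply` below are the objects from the PR description:

```python
# Equivalent to the rejected remove_columns=True: drop every input column and
# keep only what the mapped function returns.
mapped = dset.map(apply, batched=True, remove_columns=dset.column_names)
```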
https://api.github.com/repos/huggingface/datasets/issues/4196
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4196/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4196/comments
https://api.github.com/repos/huggingface/datasets/issues/4196/events
https://github.com/huggingface/datasets/issues/4196
1,211,271,261
I_kwDODunzps5IMohd
4,196
Embed image and audio files in `save_to_disk`
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,650,558,318,000
1,650,558,318,000
null
MEMBER
null
null
null
Following https://github.com/huggingface/datasets/pull/4184, currently a dataset saved using `save_to_disk` doesn't actually contain the bytes of the image or audio files. Instead it stores the path to your local files. Adding `embed_external_files` and set it to True by default to save_to_disk would be kind of a breaking change since some users will get bigger Arrow files when updating the lib, but the advantages are nice: - the resulting dataset is self contained, in case you want to delete your cache for example or share it with someone else - users also upload these Arrow files to cloud storage via the fs parameter, and in this case they would expect to upload a self-contained dataset - consistency with push_to_hub This can be implemented at the same time as sharding for `save_to_disk` for efficiency, and reuse the helpers from `push_to_hub` to embed the external files. cc @mariosasko
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4196/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4196/timeline
null
false
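Issue #4196 above describes the asymmetry at the time: `save_to_disk` kept only the local file paths of image/audio columns, while `push_to_hub` embedded the bytes. A minimal sketch of the two calls, with a placeholder repo id:

```python
# Behaviour described in the issue, before an embed_external_files option
# existed for save_to_disk:
ds.save_to_disk("./my_image_dataset")        # Arrow files reference local paths
ds.push_to_hub("username/my-image-dataset")  # placeholder repo id; file bytes are embedded
```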
https://api.github.com/repos/huggingface/datasets/issues/4194
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4194/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4194/comments
https://api.github.com/repos/huggingface/datasets/issues/4194/events
https://github.com/huggingface/datasets/pull/4194
1,210,958,602
PR_kwDODunzps42jjD3
4,194
Support lists of multi-dimensional numpy arrays
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4194). All of your documentation changes will be reflected on that endpoint." ]
1,650,543,746,000
1,650,693,116,000
null
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4194", "html_url": "https://github.com/huggingface/datasets/pull/4194", "diff_url": "https://github.com/huggingface/datasets/pull/4194.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4194.patch", "merged_at": null }
Fix #4191. CC: @SaulLu
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4194/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4194/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/4193
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4193/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4193/comments
https://api.github.com/repos/huggingface/datasets/issues/4193/events
https://github.com/huggingface/datasets/pull/4193
1,210,734,701
PR_kwDODunzps42izQG
4,193
Document save_to_disk and push_to_hub on images and audio files
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Good catch, I updated the docstrings" ]
1,650,531,876,000
1,650,621,355,000
1,650,620,971,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4193", "html_url": "https://github.com/huggingface/datasets/pull/4193", "diff_url": "https://github.com/huggingface/datasets/pull/4193.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4193.patch", "merged_at": 1650620971000 }
Following https://github.com/huggingface/datasets/pull/4187, I explained in the documentation of `save_to_disk` and `push_to_hub` how they handle image and audio data.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4193/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4193/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/4192
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4192/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4192/comments
https://api.github.com/repos/huggingface/datasets/issues/4192/events
https://github.com/huggingface/datasets/issues/4192
1,210,692,554
I_kwDODunzps5IKbPK
4,192
load_dataset can't load local dataset,Unable to find ...
{ "login": "ahf876828330", "id": 33253979, "node_id": "MDQ6VXNlcjMzMjUzOTc5", "avatar_url": "https://avatars.githubusercontent.com/u/33253979?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ahf876828330", "html_url": "https://github.com/ahf876828330", "followers_url": "https://api.github.com/users/ahf876828330/followers", "following_url": "https://api.github.com/users/ahf876828330/following{/other_user}", "gists_url": "https://api.github.com/users/ahf876828330/gists{/gist_id}", "starred_url": "https://api.github.com/users/ahf876828330/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ahf876828330/subscriptions", "organizations_url": "https://api.github.com/users/ahf876828330/orgs", "repos_url": "https://api.github.com/users/ahf876828330/repos", "events_url": "https://api.github.com/users/ahf876828330/events{/privacy}", "received_events_url": "https://api.github.com/users/ahf876828330/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi! :)\r\n\r\nI believe that should work unless `dataset_infos.json` isn't actually a dataset. For Hugging Face datasets, there is usually a file named `dataset_infos.json` which contains metadata about the dataset (eg. the dataset citation, license, description, etc). Can you double-check that `dataset_infos.json` isn't just metadata please?", "Hi @ahf876828330, \r\n\r\nAs @stevhliu pointed out, the proper way to load a dataset is not trying to load its metadata file.\r\n\r\nIn your case, as the dataset script is local, you should better point to your local loading script:\r\n```python\r\ndataset = load_dataset(\"dataset/opus_books.py\")\r\n```\r\n\r\nPlease, feel free to re-open this issue if the previous code snippet does not work for you.", "> Hi! :)\r\n> \r\n> I believe that should work unless `dataset_infos.json` isn't actually a dataset. For Hugging Face datasets, there is usually a file named `dataset_infos.json` which contains metadata about the dataset (eg. the dataset citation, license, description, etc). Can you double-check that `dataset_infos.json` isn't just metadata please?\r\n\r\nYes,you are right!So if I have a metadata dataset local,How can I turn it to a dataset that can be used by the load_dataset() function?Are there some examples?" ]
1,650,529,738,000
1,650,704,189,000
1,650,613,193,000
NONE
null
null
null
Traceback (most recent call last): File "/home/gs603/ahf/pretrained/model.py", line 48, in <module> dataset = load_dataset("json",data_files="dataset/dataset_infos.json") File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/load.py", line 1675, in load_dataset **config_kwargs, File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/load.py", line 1496, in load_dataset_builder data_files=data_files, File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/load.py", line 1155, in dataset_module_factory download_mode=download_mode, File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/load.py", line 800, in get_module data_files = DataFilesDict.from_local_or_remote(patterns, use_auth_token=self.downnload_config.use_auth_token) File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/data_files.py", line 582, in from_local_or_remote if not isinstance(patterns_for_key, DataFilesList) File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/data_files.py", line 544, in from_local_or_remote data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions) File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/data_files.py", line 194, in resolve_patterns_locally_or_by_urls for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions): File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/data_files.py", line 144, in _resolve_single_pattern_locally raise FileNotFoundError(error_msg) FileNotFoundError: Unable to find '/home/gs603/ahf/pretrained/dataset/dataset_infos.json' at /home/gs603/ahf/pretrained ![image](https://user-images.githubusercontent.com/33253979/164413285-84ea65ac-9126-408f-9cd2-ce4751a5dd73.png) ![image](https://user-images.githubusercontent.com/33253979/164413338-4735142f-408b-41d9-ab87-8484de2be54f.png) the code is in the model.py,why I can't use the load_dataset function to load my local dataset?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4192/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4192/timeline
null
false
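The resolution of issue #4192 above is in the comments: `dataset_infos.json` only holds metadata, so `load_dataset` should point at the local loading script (or at the actual data files) instead:

```python
from datasets import load_dataset

# Point load_dataset at the local loading script, not at dataset_infos.json.
dataset = load_dataset("dataset/opus_books.py")

# Or, for plain JSON data files, pass the data file itself (hypothetical path):
# dataset = load_dataset("json", data_files="dataset/train.json")
```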
https://api.github.com/repos/huggingface/datasets/issues/4191
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4191/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4191/comments
https://api.github.com/repos/huggingface/datasets/issues/4191/events
https://github.com/huggingface/datasets/issues/4191
1,210,028,090
I_kwDODunzps5IH5A6
4,191
feat: create an `Array3D` column from a list of arrays of dimension 2
{ "login": "SaulLu", "id": 55560583, "node_id": "MDQ6VXNlcjU1NTYwNTgz", "avatar_url": "https://avatars.githubusercontent.com/u/55560583?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SaulLu", "html_url": "https://github.com/SaulLu", "followers_url": "https://api.github.com/users/SaulLu/followers", "following_url": "https://api.github.com/users/SaulLu/following{/other_user}", "gists_url": "https://api.github.com/users/SaulLu/gists{/gist_id}", "starred_url": "https://api.github.com/users/SaulLu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SaulLu/subscriptions", "organizations_url": "https://api.github.com/users/SaulLu/orgs", "repos_url": "https://api.github.com/users/SaulLu/repos", "events_url": "https://api.github.com/users/SaulLu/events{/privacy}", "received_events_url": "https://api.github.com/users/SaulLu/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @SaulLu, thanks for your proposal.\r\n\r\nJust I got a bit confused about the dimensions...\r\n- For the 2D case, you mention it is possible to create an `Array2D` from a list of arrays of dimension 1\r\n- However, you give an example of creating an `Array2D` from arrays of dimension 2:\r\n - the values of `data_map` are arrays of dimension 2\r\n - the outer list in `prepare_dataset_2D` should not be taken into account in the dimension counting, as it is used because in `map` you pass `batched=True`\r\n\r\nNote that for the 3D alternatives you mention:\r\n- In `prepare_dataset_3D_ter`, you create an `Array3D` from arrays of dimension 3:\r\n - the array `data_map[index][np.newaxis, :, :]` has dimension 3\r\n - the outer list in `prepare_dataset_3D_ter` is the one used by `batched=True`\r\n- In `prepare_dataset_3D_bis`, you create an `Array3D` from a list of list of lists:\r\n - the value of `data_map[index].tolist()` is a list of lists\r\n - it is enclosed by another list `[data_map[index].tolist()]`, thus giving a list of list of lists\r\n - the outer list is the one used by `batched=True`\r\n\r\nTherefore, if I understand correctly, your request would be to be able to create an `Array3D` from a list of an array of dimension 2:\r\n- In `prepare_dataset_3D`, `data_map[index]` is an array of dimension 2\r\n- it is enclosed by a list `[data_map[index]]`, thus giving a list of an array of dimension 2\r\n- the outer list is the one used by `batched=True`\r\n\r\nPlease, feel free to tell me if I did not understand you correctly.", "Hi @albertvillanova ,\r\n\r\nIndeed my message was confusing and you guessed right :smile: : I think would be interesting to be able to create an Array3D from a list of an array of dimension 2. \r\n\r\nFor the 2D case I should have given as a \"similar\" example:\r\n```python\r\n\r\ndata_map_1D = {\r\n 1: np.array([0.2, 0.4]),\r\n 2: np.array([0.1, 0.4]),\r\n}\r\n\r\ndef prepare_dataset_2D(batch):\r\n batch[\"pixel_values\"] = [[data_map_1D[index]] for index in batch[\"id\"]]\r\n return batch\r\n \r\nds_2D = ds.map(\r\n prepare_dataset_2D, \r\n batched=True, \r\n remove_columns=ds.column_names, \r\n features=features.Features({\"pixel_values\": features.Array2D(shape=(1, 2), dtype=\"float32\")})\r\n)\r\n```" ]
1,650,477,872,000
1,650,543,737,000
null
NONE
null
null
null
**Is your feature request related to a problem? Please describe.** It is possible to create an `Array2D` column from a list of arrays of dimension 1. Similarly, I think it might be nice to be able to create a `Array3D` column from a list of lists of arrays of dimension 1. To illustrate my proposal, let's take the following toy dataset t: ```python import numpy as np from datasets import Dataset, features data_map = { 1: np.array([[0.2, 0,4],[0.19, 0,3]]), 2: np.array([[0.1, 0,4],[0.19, 0,3]]), } def create_toy_ds(): my_dict = {"id":[1, 2]} return Dataset.from_dict(my_dict) ds = create_toy_ds() ``` The following 2D processing works without any errors raised: ```python def prepare_dataset_2D(batch): batch["pixel_values"] = [data_map[index] for index in batch["id"]] return batch ds_2D = ds.map( prepare_dataset_2D, batched=True, remove_columns=ds.column_names, features=features.Features({"pixel_values": features.Array2D(shape=(2, 3), dtype="float32")}) ) ``` The following 3D processing doesn't work: ```python def prepare_dataset_3D(batch): batch["pixel_values"] = [[data_map[index]] for index in batch["id"]] return batch ds_3D = ds.map( prepare_dataset_3D, batched=True, remove_columns=ds.column_names, features=features.Features({"pixel_values": features.Array3D(shape=(1, 2, 3, dtype="float32")}) ) ``` The error raised is: ``` --------------------------------------------------------------------------- ArrowInvalid Traceback (most recent call last) [<ipython-input-6-676547e4cd41>](https://localhost:8080/#) in <module>() 3 batched=True, 4 remove_columns=ds.column_names, ----> 5 features=features.Features({"pixel_values": features.Array3D(shape=(1, 2, 3), dtype="float32")}) 6 ) 12 frames [/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc) 1971 new_fingerprint=new_fingerprint, 1972 disable_tqdm=disable_tqdm, -> 1973 desc=desc, 1974 ) 1975 else: [/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in wrapper(*args, **kwargs) 518 self: "Dataset" = kwargs.pop("self") 519 # apply actual function --> 520 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 521 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 522 for dataset in datasets: [/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in wrapper(*args, **kwargs) 485 } 486 # apply actual function --> 487 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 488 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 489 # re-apply format to the output [/usr/local/lib/python3.7/dist-packages/datasets/fingerprint.py](https://localhost:8080/#) in wrapper(*args, **kwargs) 456 # Call actual function 457 --> 458 out = func(self, *args, **kwargs) 459 460 # Update fingerprint of in-place transforms + update in-place history of transforms [/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in _map_single(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, 
rank, offset, disable_tqdm, desc, cache_only) 2354 writer.write_table(batch) 2355 else: -> 2356 writer.write_batch(batch) 2357 if update_data and writer is not None: 2358 writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file [/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py](https://localhost:8080/#) in write_batch(self, batch_examples, writer_batch_size) 505 col_try_type = try_features[col] if try_features is not None and col in try_features else None 506 typed_sequence = OptimizedTypedSequence(batch_examples[col], type=col_type, try_type=col_try_type, col=col) --> 507 arrays.append(pa.array(typed_sequence)) 508 inferred_features[col] = typed_sequence.get_inferred_type() 509 schema = inferred_features.arrow_schema if self.pa_writer is None else self.schema /usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib.array() /usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib._handle_arrow_array_protocol() [/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py](https://localhost:8080/#) in __arrow_array__(self, type) 175 storage = list_of_np_array_to_pyarrow_listarray(data, type=pa_type.value_type) 176 else: --> 177 storage = pa.array(data, pa_type.storage_dtype) 178 return pa.ExtensionArray.from_storage(pa_type, storage) 179 /usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib.array() /usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib._sequence_to_array() /usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status() /usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.check_status() ArrowInvalid: Can only convert 1-dimensional array values ``` **Describe the solution you'd like** No error in the second scenario and an identical result to the following snippets. **Describe alternatives you've considered** There are other alternatives that work such as: ```python def prepare_dataset_3D_bis(batch): batch["pixel_values"] = [[data_map[index].tolist()] for index in batch["id"]] return batch ds_3D_bis = ds.map( prepare_dataset_3D_bis, batched=True, remove_columns=ds.column_names, features=features.Features({"pixel_values": features.Array3D(shape=(1, 2, 3), dtype="float32")}) ) ``` or ```python def prepare_dataset_3D_ter(batch): batch["pixel_values"] = [data_map[index][np.newaxis, :, :] for index in batch["id"]] return batch ds_3D_ter = ds.map( prepare_dataset_3D_ter, batched=True, remove_columns=ds.column_names, features=features.Features({"pixel_values": features.Array3D(shape=(1, 2, 3), dtype="float32")}) ) ``` But both solutions require the user to be aware that `data_map[index]` is an `np.array` type. cc @lhoestq as we discuss this offline :smile:
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4191/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4191/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/4190
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4190/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4190/comments
https://api.github.com/repos/huggingface/datasets/issues/4190/events
https://github.com/huggingface/datasets/pull/4190
1,209,901,677
PR_kwDODunzps42gK3y
4,190
Deprecate `shard_size` in `push_to_hub` in favor of `max_shard_size`
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,650,470,881,000
1,650,635,905,000
1,650,635,520,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4190", "html_url": "https://github.com/huggingface/datasets/pull/4190", "diff_url": "https://github.com/huggingface/datasets/pull/4190.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4190.patch", "merged_at": 1650635520000 }
This PR adds a `max_shard_size` param to `push_to_hub` and deprecates `shard_size` in favor of this new param, whose name is more descriptive (a shard holds at most `shard_size` bytes in `push_to_hub`, so "max" better reflects the behavior) and aligns the API with [Transformers](https://github.com/huggingface/transformers/blob/ff06b177917384137af2d9585697d2d76c40cdfc/src/transformers/modeling_utils.py#L1350).
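A minimal usage sketch of the renamed parameter (the repo id and size value below are placeholders, not from the PR; pushing requires being logged in to the Hub):

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c"]})

# New, preferred parameter name introduced by this PR.
ds.push_to_hub("username/my-dataset", max_shard_size="500MB")

# The old `shard_size` parameter keeps working for now but emits a deprecation warning.
```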
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4190/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4190/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/4189
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4189/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4189/comments
https://api.github.com/repos/huggingface/datasets/issues/4189/events
https://github.com/huggingface/datasets/pull/4189
1,209,881,351
PR_kwDODunzps42gGv5
4,189
Document how to use FAISS index for special operations
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4189). All of your documentation changes will be reflected on that endpoint." ]
1,650,469,916,000
1,650,518,215,000
null
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4189", "html_url": "https://github.com/huggingface/datasets/pull/4189", "diff_url": "https://github.com/huggingface/datasets/pull/4189.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4189.patch", "merged_at": null }
Document how to use a FAISS index for special operations by accessing the index itself. Close #4029.
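A hedged sketch of the pattern this documents (the column name and random vectors are illustrative; requires `faiss-cpu` or `faiss-gpu` to be installed):

```python
import numpy as np
from datasets import Dataset

ds = Dataset.from_dict({"embeddings": np.random.rand(100, 8).tolist()})
ds.add_faiss_index(column="embeddings")

# Standard nearest-neighbour lookup through the datasets API
scores, examples = ds.get_nearest_examples(
    "embeddings", np.random.rand(8).astype("float32"), k=5
)

# For "special operations", grab the underlying faiss index directly
faiss_index = ds.get_index("embeddings").faiss_index
print(faiss_index.ntotal)  # inspect or tune the raw faiss object
```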
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4189/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4189/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/4188
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4188/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4188/comments
https://api.github.com/repos/huggingface/datasets/issues/4188/events
https://github.com/huggingface/datasets/pull/4188
1,209,740,957
PR_kwDODunzps42fpMv
4,188
Support streaming cnn_dailymail dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,650,463,476,000
1,650,470,340,000
1,650,469,969,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4188", "html_url": "https://github.com/huggingface/datasets/pull/4188", "diff_url": "https://github.com/huggingface/datasets/pull/4188.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4188.patch", "merged_at": 1650469969000 }
Support streaming cnn_dailymail dataset. Fix #3969. CC: @severo
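A small sketch of what this enables ("3.0.0" is assumed here as the usual cnn_dailymail config name):

```python
from datasets import load_dataset

# With streaming support, the dataset can be iterated without downloading
# and extracting everything first.
ds = load_dataset("cnn_dailymail", "3.0.0", split="train", streaming=True)
print(next(iter(ds))["article"][:200])
```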
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4188/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4188/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/4187
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4187/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4187/comments
https://api.github.com/repos/huggingface/datasets/issues/4187/events
https://github.com/huggingface/datasets/pull/4187
1,209,721,532
PR_kwDODunzps42flGp
4,187
Don't duplicate data when encoding audio or image
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "I'm not familiar with the concept of streaming vs non-streaming in HF datasets. I just wonder that you have the distinction here. Why doesn't it work to always make use of `bytes`? \"using a local file - which is often required for audio\" - why would that be?\r\n\r\nThe `path` would always point to some location in the `cache_dir`? I think this can be problematic. I would have expected that after I did `dataset.save_to_disk(...)` that I can remove the cache dir. But maybe just because I'm not familiar with HF. Or maybe the docs can be improved to clarify this.\r\n", "We could always load every data file into `bytes` and save it this way the audio as bytes in `arrow` format, but the problem then would be that it makes the `file` column useless, *i.e.* people cannot inspect the audio file locally anymore or else they would need to first save bytes as a file which is not evident. This either breaks backwards compatibility or forces the user to stored 2x the required size locally. There was a longer discussion here: https://github.com/huggingface/datasets/issues/3663\r\n\r\nIt's a good argument though that `dataset.save_to_disk(...)` should save everything that is needed to the disk and should be independent of other folders, but I do think the arguments of #3663 to not break backwards compatibility and to allow people to inspect the downloaded audio files locally are a bit more important here. \r\n\r\nBut maybe, we could add a flag, `save_files_as_bytes` or `make_independent`, `make_self_contained` or a better name to `save_to_disk(...)` and `push_to_hub(...)` that would allow to make the resulting folder completely independent. ", "What do you think @mariosasko @lhoestq @polinaeterna @anton-l ?\r\n", "For context: you can either store the path to local images or audio files, or the bytes of those files.\r\n\r\nIf your images and audio files are local files, then the arrow file from `save_to_disk` will store paths to these files.\r\nIf you want to include the bytes or your images or audio files instead, you must `read()` those files first.\r\nThis can be done by storing the \"bytes\" instead of the \"path\" of the images or audio files.\r\n\r\nOn the other hand, the resulting Parquet files from `push_to_hub` are self-contained, so that anyone can reload the dataset from the Hub. If your dataset contains image or audio data, the Parquet files will store the bytes of your images or audio files.\r\n\r\nFor now I just updated the documentation: https://github.com/huggingface/datasets/pull/4193. Maybe we can also embed the image and audio bytes in `save_to_disk` when we implement sharding, so that is can be done as efficiently as `push_to_hub`.\r\n\r\nAnyway, merging this one :)" ]
1,650,462,637,000
1,650,532,620,000
1,650,532,247,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4187", "html_url": "https://github.com/huggingface/datasets/pull/4187", "diff_url": "https://github.com/huggingface/datasets/pull/4187.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4187.patch", "merged_at": 1650532247000 }
Right now, if you pass both the `bytes` and a local `path` for audio or image data, the `bytes` are unnecessarily written in the Arrow file, while we could just keep the local `path`. This PR discards the `bytes` when the audio or image file exists locally. In particular, it's common for audio dataset builders to provide both the bytes and the local path in order to work in both streaming mode (using the bytes) and non-streaming mode (using a local file - which is often required for audio). cc @patrickvonplaten
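The rule the PR describes can be illustrated with a small hypothetical helper (this is not the actual implementation in the library, just a sketch of the decision):

```python
import os

def prefer_local_path(value: dict) -> dict:
    """Sketch: if the audio/image file exists locally, keep only the path
    and drop the redundant bytes before writing to Arrow."""
    path, data = value.get("path"), value.get("bytes")
    if path is not None and os.path.isfile(path):
        return {"path": path, "bytes": None}   # local file: no need to embed bytes
    return {"path": path, "bytes": data}       # streaming case: keep the bytes
```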
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4187/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4187/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/4186
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4186/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4186/comments
https://api.github.com/repos/huggingface/datasets/issues/4186/events
https://github.com/huggingface/datasets/pull/4186
1,209,463,599
PR_kwDODunzps42evF5
4,186
Fix outdated docstring about default dataset config
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,650,449,091,000
1,650,632,084,000
1,650,631,711,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4186", "html_url": "https://github.com/huggingface/datasets/pull/4186", "diff_url": "https://github.com/huggingface/datasets/pull/4186.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4186.patch", "merged_at": 1650631711000 }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4186/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4186/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/4185
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4185/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4185/comments
https://api.github.com/repos/huggingface/datasets/issues/4185/events
https://github.com/huggingface/datasets/issues/4185
1,209,429,743
I_kwDODunzps5IFm7v
4,185
Librispeech documentation, clarification on format
{ "login": "albertz", "id": 59132, "node_id": "MDQ6VXNlcjU5MTMy", "avatar_url": "https://avatars.githubusercontent.com/u/59132?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertz", "html_url": "https://github.com/albertz", "followers_url": "https://api.github.com/users/albertz/followers", "following_url": "https://api.github.com/users/albertz/following{/other_user}", "gists_url": "https://api.github.com/users/albertz/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertz/subscriptions", "organizations_url": "https://api.github.com/users/albertz/orgs", "repos_url": "https://api.github.com/users/albertz/repos", "events_url": "https://api.github.com/users/albertz/events{/privacy}", "received_events_url": "https://api.github.com/users/albertz/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "(@patrickvonplaten )", "Also cc @lhoestq here", "The documentation in the code is definitely outdated - thanks for letting me know, I'll remove it in https://github.com/huggingface/datasets/pull/4184 .\r\n\r\nYou're exactly right `audio` `array` already decodes the audio file to the correct waveform. This is done on the fly, which is also why one should **not** do `ds[\"audio\"][\"array\"][0]` as this will decode all dataset samples, but instead `ds[0][\"audio\"][\"array\"]` see: https://huggingface.co/docs/datasets/audio_process#audio-datasets\r\n\r\n", "So, again to clarify: On disk, only the raw flac file content is stored? Is this also the case after `save_to_disk`?\r\n\r\nAnd is it simple to also store it re-encoded as ogg or mp3 instead?\r\n", "Hey, \r\n\r\nSorry yeah I was just about to look into this! We actually had an outdated version of Librispeech ASR that didn't save any files, but instead converted the audio files to a byte string, then was then decoded on-the-fly. This however is not very user-friendly so we recently decided to instead show the full path of the audio files with the `path` parameter.\r\n\r\nI'm currently changing this for Librispeech here: https://github.com/huggingface/datasets/pull/4184 .\r\nYou should be able to see the audio file in the original `flac` format under `path` then. I don't think it's a good idea to convert to MP3 out-of-the-box, but we could maybe think about some kind of convert function for audio datasets cc @lhoestq ? ", "> I don't think it's a good idea to convert to MP3 out-of-the-box, but we could maybe think about some kind of convert function for audio datasets cc @lhoestq ?\r\n\r\nSure, I would expect that `load_dataset(\"librispeech_asr\")` would give you the original (not re-encoded) data (flac or already decoded). So such re-encoding logic would be some separate generic function. So I could do sth like `dataset.reencode_as_ogg(**ogg_encode_opts).save_to_disk(...)` or so.\r\n", "A follow-up question: I wonder whether a Parquet dataset is maybe more what we actually want to have? (Following also my comment here: https://github.com/huggingface/datasets/pull/4184#issuecomment-1105045491.) Because I think we actually would prefer to embed the data content in the dataset.\r\n\r\nSo, instead of `save_to_disk`/`load_from_disk`, we would use `to_parquet`,`from_parquet`? Is there any downside? Are arrow files more efficient?\r\n\r\nRelated is also the doc update in #4193.\r\n", "`save_to_disk` saves the dataset as an Arrow file, which is the format we use to load a dataset using memory mapping. This way the dataset does not fill your RAM, but is read from your disk instead.\r\n\r\nTherefore you can directly reload a dataset saved with `save_to_disk` using `load_from_disk`.\r\n\r\nParquet files are used for cold storage: to use memory mapping on a Parquet dataset, you first have to convert it to Arrow. We use Parquet to reduce the I/O when pushing/downloading data from the Hugging face Hub. When you load a Parquet file from the Hub, it is converted to Arrow on the fly during the download." ]
1,650,447,355,000
1,650,538,853,000
null
NONE
null
null
null
https://github.com/huggingface/datasets/blob/cd3ce34ab1604118351e1978d26402de57188901/datasets/librispeech_asr/librispeech_asr.py#L53 > Note that in order to limit the required storage for preparing this dataset, the audio > is stored in the .flac format and is not converted to a float32 array. To convert, the audio > file to a float32 array, please make use of the `.map()` function as follows: > > ```python > import soundfile as sf > def map_to_array(batch): > speech_array, _ = sf.read(batch["file"]) > batch["speech"] = speech_array > return batch > dataset = dataset.map(map_to_array, remove_columns=["file"]) > ``` Is this still true? In my case, `ds["train.100"]` returns: ``` Dataset({ features: ['file', 'audio', 'text', 'speaker_id', 'chapter_id', 'id'], num_rows: 28539 }) ``` and taking the first instance yields: ``` {'file': '374-180298-0000.flac', 'audio': {'path': '374-180298-0000.flac', 'array': array([ 7.01904297e-04, 7.32421875e-04, 7.32421875e-04, ..., -2.74658203e-04, -1.83105469e-04, -3.05175781e-05]), 'sampling_rate': 16000}, 'text': 'CHAPTER SIXTEEN I MIGHT HAVE TOLD YOU OF THE BEGINNING OF THIS LIAISON IN A FEW LINES BUT I WANTED YOU TO SEE EVERY STEP BY WHICH WE CAME I TO AGREE TO WHATEVER MARGUERITE WISHED', 'speaker_id': 374, 'chapter_id': 180298, 'id': '374-180298-0000'} ``` The `audio` `array` seems to be already decoded. So such convert/decode code as mentioned in the doc is wrong? But I wonder, is it actually stored as flac on disk, and the decoding is done on-the-fly? Or was it decoded already during the preparation and is stored as raw samples on disk? Note that I also used `datasets.load_dataset("librispeech_asr", "clean").save_to_disk(...)` and then `datasets.load_from_disk(...)` in this example. Does this change anything on how it is stored on disk? A small related question: Actually I would prefer to even store it as mp3 or ogg on disk. Is this easy to convert?
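Based on the answers in the comments above, decoding happens on the fly, so per-sample access is the cheap pattern; a short sketch using the same config and split as the issue:

```python
from datasets import load_dataset

ds = load_dataset("librispeech_asr", "clean", split="train.100")

# Decodes a single flac file on the fly - cheap
sample = ds[0]["audio"]
print(sample["sampling_rate"], sample["array"].shape)

# Would decode *every* file in the split - avoid this
# all_audio = ds["audio"]
```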
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4185/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4185/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/4184
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4184/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4184/comments
https://api.github.com/repos/huggingface/datasets/issues/4184/events
https://github.com/huggingface/datasets/pull/4184
1,208,592,669
PR_kwDODunzps42cB2j
4,184
[Librispeech] Add 'all' config
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Fix https://github.com/huggingface/datasets/issues/4179", "_The documentation is not available anymore as the PR was closed or merged._", "Just that I understand: With this change, simply doing `load_dataset(\"librispeech_asr\")` is possible and returns the whole dataset?\r\n\r\nAnd to get the subsets, I do sth like:\r\n```python\r\nds = load_dataset(\"librispeech_asr\")\r\ntrain_ds = ds[\"train\"]\r\ndev_clean_ds = ds[\"dev-clean\"]\r\ndev_other_ds = ds[\"dev-other\"]\r\ntest_clean_ds = ds[\"test-clean\"]\r\ntest_other_ds = ds[\"test-other\"]\r\n```\r\n?\r\n", "> Just that I understand: With this change, simply doing `load_dataset(\"librispeech_asr\")` is possible and returns the whole dataset?\r\n> \r\n> And to get the subsets, I do sth like:\r\n> \r\n> ```python\r\n> ds = load_dataset(\"librispeech_asr\")\r\n> train_ds = ds[\"train\"]\r\n> dev_clean_ds = ds[\"dev-clean\"]\r\n> dev_other_ds = ds[\"dev-other\"]\r\n> test_clean_ds = ds[\"test-clean\"]\r\n> test_other_ds = ds[\"test-other\"]\r\n> ```\r\n> \r\n> ?\r\n\r\nYou could do:\r\n\r\n\r\n```python\r\nds = load_dataset(\"librispeech_asr\", \"all\") # <- note that we have to pass a config\r\ntrain_ds = ds[\"train\"]\r\ndev_clean_ds = ds[\"dev-clean\"]\r\ndev_other_ds = ds[\"dev-other\"]\r\ntest_clean_ds = ds[\"test-clean\"]\r\ntest_other_ds = ds[\"test-other\"]\r\n```", "So, `load_dataset(\"librispeech_asr\")` is not possible, it must be `load_dataset(\"librispeech_asr\", \"all\")`?\r\n\r\nWhy is that?\r\n\r\nThe docs say:\r\n```\r\nname: `str` name, optional configuration for the dataset that affects the data generated on disk. Different\r\n `builder_config`s will have their own subdirectories and versions.\r\n If not provided, uses the first configuration in self.BUILDER_CONFIGS\r\n```\r\nhttps://github.com/huggingface/datasets/blob/cd3ce34ab1604118351e1978d26402de57188901/src/datasets/builder.py#L228\r\n\r\nOr maybe you could just define `DEFAULT_CONFIG_NAME`?\r\n", "> If not provided, uses the first configuration in self.BUILDER_CONFIGS\r\n\r\nOh crap this is outdated documentation. No it doesn't take the first config by default.\r\n\r\nEDIT: opened a PR to fix this: https://github.com/huggingface/datasets/pull/4186", "> No it doesn't take the first config by default.\r\n\r\nBut defining `DEFAULT_CONFIG_NAME` would work?\r\n\r\nSo should we define `DEFAULT_CONFIG_NAME = \"all\"` here as well? I think this is a reasonable default config.\r\n\r\nDon't most datasets have some default config?\r\n", "> But defining DEFAULT_CONFIG_NAME would work?\r\n>\r\n> So should we define DEFAULT_CONFIG_NAME = \"all\" here as well? I think this is a reasonable default config.\r\n\r\nYes that would work, and I also find it reasonable to do it :)\r\n\r\n> Don't most datasets have some default config?\r\n\r\nMost datasets only have one configuration, so the single configuration is the default one. Then other datasets gave several configurations, and whether they have a default one is decided case-by-case.\r\n\r\ne.g. `glue` is a benchmark and doesn't have a default task, one must choose which task of `glue` they want to use explicitely.", "Thanks a lot for the feedback! \r\n\r\nUsing `\"all\"` now as the default config. I changed the layout a bit so that there is not a single \"train\", but instead we have multiple \"train.clean.100\", \"train.clean.360\", \"train.other.500\". 
This way we don't even need to do filtering and it's also cleaner IMO.\r\n\r\n@albertz - you should now be able to do the following:\r\n\r\n```python\r\nload_dataset(\"librispeech_asr\") # <- run this once to download, prepare dataset and cache everything\r\n\r\n# The following operations will be very fast since all the downloading and processing is already cached\r\ntrain_1 = load_dataset(\"librispeech_asr\", split=\"train.clean.100\")\r\nprint(train_1)\r\ntrain_2 = load_dataset(\"librispeech_asr\", split=\"train.clean.100+train.clean.360\")\r\nprint(train_2)\r\ntrain_full = load_dataset(\"librispeech_asr\", split=\"train.clean.100+train.clean.360+train.other.500\")\r\nprint(train_full)\r\ndev_clean_ds = load_dataset(\"librispeech_asr\", split=\"validation.clean\")\r\nprint(dev_clean_ds)\r\ndev_other_ds = load_dataset(\"librispeech_asr\", split=\"validation.other\")\r\nprint(dev_other_ds)\r\ntest_clean_ds = load_dataset(\"librispeech_asr\", split=\"test.clean\")\r\nprint(test_clean_ds)\r\ntest_other_ds = load_dataset(\"librispeech_asr\", split=\"test.other\")\r\nprint(test_other_ds)\r\n```\r\n\r\n\r\n", "Think this way we have the best of both worlds. Also @lhoestq, I think we could highlight better in the docs that it's possible to combine different splits. We do this actually quite a lot for speech. For Common Voice many people include \"validation\" in the training if the data is too small, e.g.: https://github.com/huggingface/transformers/blob/ff06b177917384137af2d9585697d2d76c40cdfc/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py#L147\r\n\r\nShould we maybe add a short section to the loading tutorial here: https://huggingface.co/docs/datasets/v2.1.0/en/loading#hugging-face-hub ? (Happy to do it)", "Is there any advantage or difference in calling `load_dataset` multiple times for each split? Or why not just call `load_dataset` once and then access each split?\r\n\r\nNote in our case, we cannot really use the caching mechanism because we have a recipe pipeline used by multiple users (and I think a common cache dir for all users might end up in problems) and we basically would use `load_dataset(\"librispeech_asr\").save_to_disk(...)` and then later `load_from_disk(...)`. (See here: https://github.com/rwth-i6/i6_core/pull/253)\r\n\r\nSo with `load_from_disk`, we cannot really provide the split this way, so we anyway would do sth like:\r\n```python\r\nds = datasets.load_from_disk(...)\r\ntrain = ds[\"train\"]\r\n```\r\nOr with your latest proposal, it would look like:\r\n```python\r\nds = datasets.load_from_disk(...)\r\ntrain_ds = datasets.concatenate_datasets(\r\n [ds[\"train.clean.100\"], ds[\"train.clean.360\"], ds[\"train.other.500\"]])\r\n```\r\nright?\r\n", "> Is there any advantage or difference in calling `load_dataset` multiple times for each split? Or why not just call `load_dataset` once and then access each split?\r\n> \r\n> Note in our case, we cannot really use the caching mechanism because we have a recipe pipeline used by multiple users (and I think a common cache dir for all users might end up in problems) and we basically would use `load_dataset(\"librispeech_asr\").save_to_disk(...)` and then later `load_from_disk(...)`. 
(See here: [rwth-i6/i6_core#253](https://github.com/rwth-i6/i6_core/pull/253))\r\n> \r\n> So with `load_from_disk`, we cannot really provide the split this way, so we anyway would do sth like:\r\n> \r\n> ```python\r\n> ds = datasets.load_from_disk(...)\r\n> train = ds[\"train\"]\r\n> ```\r\n> \r\n> Or with your latest proposal, it would look like:\r\n> \r\n> ```python\r\n> ds = datasets.load_from_disk(...)\r\n> train_ds = datasets.concatenate_datasets(\r\n> [ds[\"train.clean.100\"], ds[\"train.clean.360\"], ds[\"train.other.500\"]])\r\n> ```\r\n> \r\n> right?\r\n\r\nI see the use case! The only advantage by calling `datasets` multiple times is that one can easily \"merge\" splits with `\"+\"`, but yeah you can do the exact same with `concatenate`.\r\n\r\n@lhoestq what do you think is the best approach with `load_from_disk`? \r\n\r\n@albertz, you could also define the `cache_dir` when doing `load_dataset(...)` which will then put all the relevant `arrow` files int the cache dir that you defined, e.g.:\r\n\r\n```python\r\nload_dataset(\"librispeech_asr\", cache_dir=\"/easy/to/access/directory\")\r\n```", "@albertz, I took a read through https://github.com/rwth-i6/i6_core/pull/253 . \r\n\r\nI think the best would be the following:\r\n\r\n1. Do `ds = load_dataset(..., cache_dir=\"/dir/that/is/easy/to/access\")` <- having merged this PR, this will save all the original `.flac` files in the `cache_dir`\r\n2. Do `ds.save_to_disk(\"local/path\")` this should then only save the `arrow.format` with a `path` string to the audio files which are located in `cache_dir` <- this won't require a lot of memory after https://github.com/huggingface/datasets/pull/4184#discussion_r854132740 is fixed and can be done for each person individually.\r\n3. `ds = datasets.load_from_disk(\"local/path\")` can the be used. An object of `ds` will then have a `path` variable that links to the original audio files in the `cache_dir`. You can change these audio files then easily to `.mp3. You could do this with the `.map(...)` function, e.g. define a function that maps through all audio files, load them and then save them on disk afterward.", "@lhoestq - I think this one is good to go", "> @albertz, I took a read through [rwth-i6/i6_core#253](https://github.com/rwth-i6/i6_core/pull/253) .\r\n> \r\n> I think the best would be the following:\r\n> \r\n> 1. Do `ds = load_dataset(..., cache_dir=\"/dir/that/is/easy/to/access\")` <- having merged this PR, this will save all the original `.flac` files in the `cache_dir`\r\n> 2. Do `ds.save_to_disk(\"local/path\")` this should then only save the `arrow.format` with a `path` string to the audio files which are located in `cache_dir` <- this won't require a lot of memory after [[Librispeech] Add 'all' config #4184 (comment)](https://github.com/huggingface/datasets/pull/4184#discussion_r854132740) is fixed and can be done for each person individually.\r\n> 3. `ds = datasets.load_from_disk(\"local/path\")` can the be used. An object of `ds` will then have a `path` variable that links to the original audio files in the `cache_dir`. You can change these audio files then easily to `.mp3. You could do this with the `.map(...)` function, e.g. define a function that maps through all audio files, load them and then save them on disk afterward.\r\n\r\nOh, so you say that our current implementation in https://github.com/rwth-i6/i6_core/pull/253 is broken? Because our cache dir is just some temp directory which will be removed afterwards, and we just store what we get out of `save_to_disk`. 
I think it would be good to clarify that in the doc of `save_to_disk`, that this is not enough and can depend on files from the cache dir. (@dthulke)\r\n\r\nSo, you say we anyway need to share the cache dir among users? But we would want to make sure that after the initial download and preparation of the data, this is set to readonly, because we want to make sure that other people will not modify the data in any way. Right?\r\n\r\nBut then, we don't really need the `save_to_disk` and `load_from_disk` at all, right?\r\n", "@albertz \r\n\r\n> Oh, so you say that our current implementation in https://github.com/rwth-i6/i6_core/pull/253 is broken? Because our cache dir is just some temp directory which will be removed afterwards, and we just store what we get out of save_to_disk. I think it would be good to clarify that in the doc of save_to_disk, that this is not enough and can depend on files from the cache dir. (@dthulke)\r\n\r\nOh, I wasn't aware that audio files are handled this way. Then we should have the cache directory as an additional job output, so that we keep the audio files. \r\n\r\n> So, you say we anyway need to share the cache dir among users?\r\n\r\nNo, the cache dir can still be a directory in the job output folder. Then the audio paths in the corresponding dataset column correspond to the flac files in that directory. This way the \"output\" of the job is contained into the job directory and we don't write files to a global cache directory that is independent of the sisyphus graph.\r\n\r\nIf we want to share the audio data between different users, we can just link to a central instance of the job (similar to how we do it with the `DownloadLibriSpeechCorpusJob`).", "@dthulke - that's a good point actually! So you can do both things:\r\n\r\n1. Convert all audio files to bytes. Bytes can be saved by `arrow` so in this case you can do `save_to_disk(...)`, but then you cannot really inspect the audio files locally as they'll just be saved within a large arrow file (this actually used to be the default case but we're changing this now). The problem of this is summarized here a bit: https://github.com/huggingface/datasets/issues/3663 . You can still do this if you'd like, e.g. you could do:\r\n\r\n```python\r\nds = load_dataset(\"librispeech_asr\")\r\n\r\ndef read_file(batch):\r\n with open(batch[\"file\"], \"r\") as f:\r\n batch[\"bytes\"] = f.read() \r\n return batch\r\n\r\nds = ds.map(read_file)\r\nds.save_to_disk(\"/path\") <- the saved arrow object will now contain everything you need\r\n```\r\n\r\nhowever this is not recommend - it's should be much easier to just save the path to the downloaded audio files.\r\n\r\n2. Not convert audio files to bytes, but just leave them in their original file format. Then only the path to the original files will be save in arrow. This will be the default case. This means that when you do `load_dataset(...)` both the orginal audio data and the arrow file will be saved in the `cache_dir` (which can be saved locally for every user or in a shared cache - we actually use a shared cache quite a bit at Hugging Face). 
When do you do `save_to_disk(...)` now only the `path` will be saved in `arrow` format (after this PR is merged, you'll see that the `arrow files should be very light weight` meaning that `save_to_disk(...)` can be done for every user, but has a dependency on the `cache_dir` (because the audio files live there).\r\n\r\n=> Now what you could do as well would be to simply move all the audio files to the folder you want (the `save_to_disk(...)` folder) and then change the path of every sample to this folder (maybe with `map(...)`) and then this folder would be self contained. I do however think it's better to just specific a `cache_dir` and re-use `load_dataset(...)` every time instead of `load_from_disk` or `save_to_disk(...)`. Note that you can even pass the relevant cache files to `load_dataset(...)` here: https://huggingface.co/docs/datasets/v2.1.0/en/package_reference/loading_methods#datasets.load_dataset.data_files in which case you can be 100% sure that nothing is redownloaded. \r\n\r\nWe discussed storing audio files quite a bit, e.g. see: https://github.com/huggingface/datasets/issues/3663 and had (too many) changes around this topic recently, but we've come to the conclusion that the best is to leave the audio format in the format it was originally (`.flac` for Librispeech) so that the user can easily inspect it / understand the data. Arrow cannot save data is `.flac` so we'll just save a path to the original data. Curious to hear your guys opinion on this as well.", "So what I would suggest here is to do the following:\r\n\r\n1. Do `load_dataset(..., cache_dir=/a/read-only/folder)`\r\n2. \r\n- Either just re-use `load_dataset(..., cache_dir=...)` which should always re-use the data in the `cache_dir` since the hash of the url matches - so there should never be any duplicated downloading \r\n\r\nor \r\n\r\n- If you want to store the files in MP3 locally, first convert the files to MP3 in the read-only folder, then take do `ds.save_to_disk(/some/path)` which will save the correct path to the read-only folder to MP3 and then you can easily re-use the small arrow dataset that is saved in `/some/path`", "> So what I would suggest here is to do the following:\r\n> \r\n> 1. Do `load_dataset(..., cache_dir=/a/read-only/folder)`\r\n> \r\n> * Either just re-use `load_dataset(..., cache_dir=...)` which should always re-use the data in the `cache_dir` since the hash of the url matches - so there should never be any duplicated downloading\r\n> \r\n> or\r\n> \r\n> * If you want to store the files in MP3 locally, first convert the files to MP3 in the read-only folder, then take do `ds.save_to_disk(/some/path)` which will save the correct path to the read-only folder to MP3 and then you can easily re-use the small arrow dataset that is saved in `/some/path`\r\n\r\nAlso relevant here: https://github.com/huggingface/datasets/issues/3663", "I also added some documentation about how `save_to_disk` handles audio files here: https://github.com/huggingface/datasets/pull/4193", "> > So, you say we anyway need to share the cache dir among users?\r\n> \r\n> No, the cache dir can still be a directory in the job output folder.\r\n\r\n@dthulke But this is what I mean. When we share the job output folder, it means we share the cache dir among users.\r\n\r\nI wonder if `load_dataset(..., cache_dir=job_output_cache_dir)` is always save to do then, that it really would not modify the `job_output_cache_dir`.\r\n\r\nWe could enforce that by making the `job_output_cache_dir` read-only afterwards. 
We currently don't do this.\r\n\r\n@patrickvonplaten @dthulke But in any case, we actually prefer the data content to be inside the dataset (the arrow files). Lots of small files would be very problematic for our cache manager. We have one main copy of the data on NFS, but accessing the NFS directly by all computing nodes is not feasible, so the cache manager will have copies of the files on the nodes. So it means, whenever we access some file, we query the cache manager DB whether the file is already cached somewhere (some other computing node) and if so, it copies it from the other computing node and not from NFS. This works very well when there are not too many files (but the files can be big). So, we want to have only a few but big files. Even for NFS access this is much better.\r\n\r\nI also commented in #3663.\r\n", "Hey @albertz @dthulke,\r\n\r\nThanks a lot for your input! \r\n\r\nWe've discussed quite a bit with @lhoestq and we think the best approach is the following:\r\n\r\n\r\na)\r\n`load_dataset(...)` will not store both bytes and the files because this would mean that 3x the size of the dataset would often be needed (1. the compressed `tar.gz` file, 2. the extracted file b, 3. the raw bytes in arrow format). \r\n\r\nFor canonical datasets like librispeech and common voice I think we want to keep the dataset filenames because of i) no breaking changes and ii) reasons explained in #3663\r\n\r\nHowever it's also trivial to write your own datasetset downloading script of librispeech and just not extract the folder e.g. this line: https://huggingface.co/datasets/common_voice/blob/main/common_voice.py#L671\r\n\r\nAnd then it'll be allowed to save the bytes and the dataset will be self-contained out-of-the-box when using `load_dataset(...)`\r\n\r\nb) Now, one major problem that you guys uncovered is that `save_to_disk(...)` is currently not necessarily saving a dataset to be self-contained. We will change that asap. This means that after we've corrected this when you do download the canonical librispeech dataset the following will work:\r\n\r\n```python\r\nds = load_dataset(\"....\") # <- here we have a dependency on the filepathes\r\nds[0][\"audio\"][\"bytes\"] # <- will not work\r\n\r\nds.save_to_disk(\"/local/path\") # <- now we want to have a self-contained dataset in arrow format, so we load the files into bytes and save it in arrow format\r\n\r\n# now you can delete everything besides \"/local/path\"\r\n\r\nds = load_from_disk(\"/local/path\") # <- this will work\r\n```\r\n\r\nSo either option a) where you define your own librispeech data downloading script (you guys could just sign up here: https://huggingface.co/join) and upload a dataset loading script in private mode so that no one can see it and you would always store the audio as bytes or b) where you first load then save to disk then delete cache would work. \r\n\r\nHope that fits in your vision :-)\r\n\r\ncc @lhoestq @mariosasko ", "@patrickvonplaten sounds like a good approach to me. For b) this could even be configurable with a parameter like `embed_external_files` as you have for `push_to_hub` (if people prefer to keep separate audio files).\r\n", "> However it's also trivial to write your own datasetset downloading script of librispeech and just not extract the folder\r\n\r\nI don't exactly understand. In all cases, we need to extract it to prepare the dataset, or not? No matter if we want to store the raw bytes inside the dataset or leaving them as local files. 
Just in the first case, we can safely delete the extracted files after the dataset preparation.\r\n\r\n> `save_to_disk(...)` is currently not necessarily saving a dataset to be self-contained. We will change that asap.\r\n\r\nFor us, this sounds exactly like what we want.\r\n\r\nBut regarding not introducing breaking changes, wouldn't this maybe also break some setups for users who don't expect this new behavior?\r\n", "@albertz I would suggest to move the discussion on implementation details on our side to the following issue: rwth-i6/i6_core/issues/257", "I like the idea of adding `embed_external_files` and set it to True by default to `save_to_disk`.\r\nIt's indeed a kind of breaking change since some users will get bigger Arrow files when updating the lib, but the advantages are nice:\r\n1. I like the idea of having it self contained, in case you want to delete your cache\r\n2. users also upload these Arrow files to cloud storage via the `fs` parameter, and in this case they would expect to upload a self-contained dataset\r\n3. consistency with `push_to_hub`\r\n\r\nIf it sounds good to you I'll open an issue to discuss this and track the advancements" ]
1,650,385,676,000
1,650,621,099,000
1,650,620,717,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4184", "html_url": "https://github.com/huggingface/datasets/pull/4184", "diff_url": "https://github.com/huggingface/datasets/pull/4184.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4184.patch", "merged_at": 1650620717000 }
Add `"all"` config to Librispeech
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4184/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4184/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/4183
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4183/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4183/comments
https://api.github.com/repos/huggingface/datasets/issues/4183/events
https://github.com/huggingface/datasets/pull/4183
1,208,449,335
PR_kwDODunzps42bjXn
4,183
Document librispeech configs
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I think the main purpose of #4179 was how to be able to load both configs into one, so should we maybe add this part of the code: https://github.com/huggingface/datasets/issues/4179#issuecomment-1102383717 \r\n\r\nto the doc? \r\n\r\nActually @lhoestq would this work given that they have different split names: https://huggingface.co/datasets/librispeech_asr#data-splits ? ", "This doc extension does not explain why I can't simply load the whole dataset. Or what workaround I need to get the whole dataset, which is what people usually want for Librispeech.", "_The documentation is not available anymore as the PR was closed or merged._", "@lhoestq, I can add a `\"all\"` config to Librispeech have the datasets already cached somewhere ", "I'm closing this PR then, feel free to continue the discussion in https://github.com/huggingface/datasets/issues/4179\r\n" ]
1,650,378,419,000
1,650,381,696,000
1,650,381,320,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4183", "html_url": "https://github.com/huggingface/datasets/pull/4183", "diff_url": "https://github.com/huggingface/datasets/pull/4183.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4183.patch", "merged_at": null }
Added an example of how to load one config or the other
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4183/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4183/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/4182
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4182/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4182/comments
https://api.github.com/repos/huggingface/datasets/issues/4182/events
https://github.com/huggingface/datasets/issues/4182
1,208,285,235
I_kwDODunzps5IBPgz
4,182
Zenodo.org download is not responding
{ "login": "dkajtoch", "id": 32985207, "node_id": "MDQ6VXNlcjMyOTg1MjA3", "avatar_url": "https://avatars.githubusercontent.com/u/32985207?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dkajtoch", "html_url": "https://github.com/dkajtoch", "followers_url": "https://api.github.com/users/dkajtoch/followers", "following_url": "https://api.github.com/users/dkajtoch/following{/other_user}", "gists_url": "https://api.github.com/users/dkajtoch/gists{/gist_id}", "starred_url": "https://api.github.com/users/dkajtoch/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dkajtoch/subscriptions", "organizations_url": "https://api.github.com/users/dkajtoch/orgs", "repos_url": "https://api.github.com/users/dkajtoch/repos", "events_url": "https://api.github.com/users/dkajtoch/events{/privacy}", "received_events_url": "https://api.github.com/users/dkajtoch/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "[Off topic but related: Is the uptime of S3 provably better than Zenodo's?]", "Hi @dkajtoch, please note that at HuggingFace we are not hosting this dataset: we are just using a script to download their data file and create a dataset from it.\r\n\r\nIt was the dataset owners decision to host their data at Zenodo. You can see this on their website: https://marcobaroni.org/composes/sick.html\r\n\r\nAnd yes, you are right: Zenodo is currently having some incidents and people are reporting problems from it.\r\n\r\nOn the other hand, we could contact the data owners and propose them to host their data at our Hugging Face Hub.\r\n\r\n@julien-c I guess so.\r\n", "Thanks @albertvillanova. I know that the problem lies in the source data. I just wanted to point out that these kind of problems are unavoidable without having one place where data sources are cached. Websites may go down or data sources may move. Having a copy in Hugging Face Hub would be a great solution. ", "Definitely, @dkajtoch! But we have to ask permission to the data owners. And many dataset licenses directly forbid data redistribution: in those cases we are not allowed to host their data on our Hub.", "Ahhh good point! License is the problem :(" ]
1,650,371,217,000
1,650,438,665,000
1,650,438,665,000
CONTRIBUTOR
null
null
null
## Describe the bug The source download_url from zenodo.org does not respond. `_DOWNLOAD_URL = "https://zenodo.org/record/2787612/files/SICK.zip?download=1"` Other datasets also use zenodo.org to store data and they cannot be downloaded either. It would be better to use a more reliable way to store the original data, such as an S3 bucket. ## Steps to reproduce the bug ```python load_dataset("sick") ``` ## Expected results The dataset should be downloaded. ## Actual results ConnectionError: Couldn't reach https://zenodo.org/record/2787612/files/SICK.zip?download=1 (ReadTimeout(ReadTimeoutError("HTTPSConnectionPool(host='zenodo.org', port=443): Read timed out. (read timeout=100)"))) ## Environment info - `datasets` version: 2.1.0 - Platform: Darwin-21.4.0-x86_64-i386-64bit - Python version: 3.7.11 - PyArrow version: 7.0.0 - Pandas version: 1.3.5
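Not a fix for the outage itself, but while the host is flaky, retrying the download can help; a hedged sketch (whether `max_retries` is available depends on the installed `datasets` version):

```python
from datasets import DownloadConfig, load_dataset

# Retry the flaky Zenodo download a few times instead of failing on the
# first read timeout; `max_retries=5` is an illustrative value.
dl_config = DownloadConfig(max_retries=5)
ds = load_dataset("sick", download_config=dl_config)
```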
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4182/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4182/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/4181
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4181/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4181/comments
https://api.github.com/repos/huggingface/datasets/issues/4181/events
https://github.com/huggingface/datasets/issues/4181
1,208,194,805
I_kwDODunzps5IA5b1
4,181
FLEURS
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
open
false
null
[]
null
[ "Yes, you just have to use `dl_manager.iter_archive` instead of `dl_manager.download_and_extract`.\r\n\r\nThat's because `download_and_extract` doesn't support TAR archives in streaming mode." ]
1,650,366,596,000
1,650,631,208,000
null
MEMBER
null
null
null
## Dataset viewer issue for 'FLEURS' https://huggingface.co/datasets/google/fleurs ``` Status code: 400 Exception: NotImplementedError Message: Extraction protocol for TAR archives like 'https://storage.googleapis.com/xtreme_translations/FLEURS/af_za.tar.gz' is not implemented in streaming mode. Please use `dl_manager.iter_archive` instead. ``` Am I the one who added this dataset? Yes. Can I fix this somehow in the script? @lhoestq @severo
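Following the suggestion in the error message and the comment above, the usual fix in a loading script looks roughly like the skeleton below (an illustrative sketch, not the actual FLEURS script; features and filenames are assumptions):

```python
import datasets

_URL = "https://storage.googleapis.com/xtreme_translations/FLEURS/af_za.tar.gz"

class FleursSketch(datasets.GeneratorBasedBuilder):
    """Illustrative builder skeleton showing iter_archive, not the real FLEURS script."""

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features(
                {"path": datasets.Value("string"), "audio": datasets.Audio(sampling_rate=16_000)}
            )
        )

    def _split_generators(self, dl_manager):
        # download() without extract(): iter_archive streams members of the TAR,
        # which is what makes streaming mode work.
        archive = dl_manager.download(_URL)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"files": dl_manager.iter_archive(archive)},
            )
        ]

    def _generate_examples(self, files):
        for key, (path, f) in enumerate(files):
            if path.endswith(".wav"):
                yield key, {"path": path, "audio": {"path": path, "bytes": f.read()}}
```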
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4181/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4181/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/4180
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4180/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4180/comments
https://api.github.com/repos/huggingface/datasets/issues/4180/events
https://github.com/huggingface/datasets/issues/4180
1,208,042,320
I_kwDODunzps5IAUNQ
4,180
Add some iteration method on a dataset column (specific for inference)
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "Thanks for the suggestion ! I agree it would be nice to have something directly in `datasets` to do something as simple as that\r\n\r\ncc @albertvillanova @mariosasko @polinaeterna What do you think if we have something similar to pandas `Series` that wouldn't bring everything in memory when doing `dataset[\"audio\"]` ? Currently it returns a list with all the decoded audio data in memory.\r\n\r\nIt would be a breaking change though, since `isinstance(dataset[\"audio\"], list)` wouldn't work anymore, but we could implement a `Sequence` so that `dataset[\"audio\"][0]` still works and only loads one item in memory.\r\n\r\nYour alternative suggestion with `iterate` is also sensible, though maybe less satisfactory in terms of experience IMO", "I agree that current behavior (decoding all audio file sin the dataset when accessing `dataset[\"audio\"]`) is not useful, IMHO. Indeed in our docs, we are constantly warning our collaborators not to do that.\r\n\r\nTherefore I upvote for a \"useful\" behavior of `dataset[\"audio\"]`. I don't think the breaking change is important in this case, as I guess no many people use it with its current behavior. Therefore, for me it seems reasonable to return a generator (instead of an in-memeory list) for \"special\" features, like Audio/Image.\r\n\r\n@lhoestq on the other hand I don't understand your proposal about Pandas-like... ", "I recall I had the same idea while working on the `Image` feature, so I agree implementing something similar to `pd.Series` that lazily brings elements in memory would be beneficial.", "@lhoestq @mariosasko Could you please give a link to that new feature of `pandas.Series`? As far as I remember since I worked with pandas for more than 6 years, there was no lazy in-memory feature; it was everything in-memory; that was the reason why other frameworks were created, like Vaex or Dask, e.g. ", "Yea pandas doesn't do lazy loading. I was referring to pandas.Series to say that they have a dedicated class to represent a column ;)" ]
1,650,359,745,000
1,650,537,058,000
null
CONTRIBUTOR
null
null
null
**Is your feature request related to a problem? Please describe.** Currently, `dataset["audio"]` will load EVERY element in the dataset into RAM, which can be quite big for an audio dataset. Having an iterator (or sequence) type of object would make inference with `transformers`' `pipeline` easier to use and not so memory-hungry. **Describe the solution you'd like** For a non-breaking change: ```python for audio in dataset.iterate("audio"): # {"array": np.array(...), "sampling_rate":...} ``` For a breaking-change solution (not necessary), changing the type of `dataset["audio"]` to a sequence type so that ```python pipe = pipeline(model="...") for out in pipe(dataset["audio"]): # {"text":....} ``` could work. **Describe alternatives you've considered** ```python def iterate(dataset, key): for item in dataset: yield item[key] for out in pipeline(iterate(dataset, "audio")): # {"array": ...} ``` This works but requires the helper function, which feels slightly clunky. **Additional context** The context is actually to showcase better integration between `pipeline` and `datasets` in the Quicktour demo: https://github.com/huggingface/transformers/pull/16723/files @lhoestq
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4180/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4180/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/4179
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4179/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4179/comments
https://api.github.com/repos/huggingface/datasets/issues/4179/events
https://github.com/huggingface/datasets/issues/4179
1,208,001,118
I_kwDODunzps5IAKJe
4,179
Dataset librispeech_asr fails to load
{ "login": "albertz", "id": 59132, "node_id": "MDQ6VXNlcjU5MTMy", "avatar_url": "https://avatars.githubusercontent.com/u/59132?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertz", "html_url": "https://github.com/albertz", "followers_url": "https://api.github.com/users/albertz/followers", "following_url": "https://api.github.com/users/albertz/following{/other_user}", "gists_url": "https://api.github.com/users/albertz/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertz/subscriptions", "organizations_url": "https://api.github.com/users/albertz/orgs", "repos_url": "https://api.github.com/users/albertz/repos", "events_url": "https://api.github.com/users/albertz/events{/privacy}", "received_events_url": "https://api.github.com/users/albertz/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "@patrickvonplaten Hi! I saw that you prepared this? :)", "Another thing, but maybe this should be a separate issue: As I see from the code, it would try to use up to 16 simultaneous downloads? This is problematic for Librispeech or anything on OpenSLR. On [the homepage](https://www.openslr.org/), it says:\r\n\r\n> If you want to download things from this site, please download them one at a time, and please don't use any fancy software-- just download things from your browser or use 'wget'. We have a firewall rule to drop connections from hosts with more than 5 simultaneous connections, and certain types of download software may activate this rule.\r\n\r\nRelated: https://github.com/tensorflow/datasets/issues/3885", "Hey @albertz,\r\n\r\nNice to see you here! It's been a while ;-) ", "Sorry maybe the docs haven't been super clear here. By `split` we mean one of `train.500`, `train.360`, `train.100`, `validation`, `test`. For Librispeech, you'll have to specific a config (either `other` or `clean`) though:\r\n\r\n```py\r\ndatasets.load_dataset(\"librispeech_asr\", \"clean\")\r\n```\r\n\r\nshould work and give you all splits (being \"train\", \"test\", ...) for the clean config of the dataset.\r\n", "If you need both `\"clean\"` and `\"other\"` I think you'll have to do concatenate them as follows: \r\n\r\n```py\r\nfrom datasets import concatenate_datasets, load_dataset\r\n\r\nother = load_dataset(\"librispeech_asr\", \"other\")\r\nclean = load_dataset(\"librispeech_asr\", \"clean\")\r\n\r\nlibrispeech = concatenate_datasets([other, clean])\r\n```\r\n\r\nSee https://huggingface.co/docs/datasets/v2.1.0/en/process#concatenate", "Downloading one split would be:\r\n\r\n```py\r\nfrom datasets import load_dataset\r\n\r\nother = load_dataset(\"librispeech_asr\", \"other\", split=\"train.500\")\r\n```\r\n\r\n\r\n", "cc @lhoestq FYI maybe the docs can be improved here", "Ah thanks. But wouldn't it be easier/nicer (and more canonical) to just make it in a way that simply `load_dataset(\"librispeech_asr\")` works?", "Pinging @lhoestq here, think this could make sense! Not sure however how the dictionary would then look like", "Would it make sense to have `clean` as the default config ?\r\n\r\nAlso I think `load_dataset(\"librispeech_asr\")` should have raised you an error that says that you need to specify a config\r\n\r\nI also opened a PR to improve the doc: https://github.com/huggingface/datasets/pull/4183", "> Would it make sense to have `clean` as the default config ?\r\n\r\nI think a user would expect that the default would give you the full dataset.\r\n\r\n> Also I think `load_dataset(\"librispeech_asr\")` should have raised you an error that says that you need to specify a config\r\n\r\nIt does raise an error, but this error confused me because I did not understand why I needed a config, or why I could not simply download the whole dataset, which is what people usually do with Librispeech.\r\n", "+1 for @albertz. Also think lots of people download the whole dataset (`\"clean\"` + `\"other\"`) for Librispeech.\r\n\r\nThink there are also some people though who:\r\n- a) Don't have the memory to store the whole dataset\r\n- b) Just want to evaluate on one of the two configs", "Ok ! Adding the \"all\" configuration would do the job then, thanks ! 
In the \"all\" configuration we can merge all the train.xxx splits into one \"train\" split, or keep them separate depending on what's the most practical to use (probably put everything in \"train\" no ?)", "I'm not too familiar with how to work with HuggingFace datasets, but people often do some curriculum learning scheme, where they start with train.100, later go over to train.100 + train.360, and then later use the whole train (960h). It would be good if this is easily possible.\r\n", "Hey @albertz, \r\n\r\nopened a PR here. Think by adding the \"subdataset\" class to each split \"train\", \"dev\", \"other\" as shown here: https://github.com/huggingface/datasets/pull/4184/files#r853272727 it should be easily possible (e.g. with the filter function https://huggingface.co/docs/datasets/v2.1.0/en/package_reference/main_classes#datasets.Dataset.filter )", "But also since everything is cached one could also just do:\r\n\r\n```python\r\nload_dataset(\"librispeech\", \"clean\", \"train.100\")\r\nload_dataset(\"librispeech\", \"clean\", \"train.100+train.360\")\r\nload_dataset(\"librispeech\" \"all\", \"train\") \r\n```" ]
1,650,357,948,000
1,650,386,186,000
null
NONE
null
null
null
## Describe the bug The dataset librispeech_asr (standard Librispeech) fails to load. ## Steps to reproduce the bug ```python datasets.load_dataset("librispeech_asr") ``` ## Expected results It should download and prepare the whole dataset (all subsets). In [the doc](https://huggingface.co/datasets/librispeech_asr), it says it has two configurations (clean and other). However, the dataset doc says that not specifying `split` should just load the whole dataset, which is what I want. Also, in case of this specific dataset, this is also the standard what the community uses. When you look at any publications with results on Librispeech, they always use the whole train dataset for training. ## Actual results ``` ... File "/home/az/.cache/huggingface/modules/datasets_modules/datasets/librispeech_asr/1f4602f6b5fed8d3ab3e3382783173f2e12d9877e98775e34d7780881175096c/librispeech_asr.py", line 119, in LibrispeechASR._split_generators line: archive_path = dl_manager.download(_DL_URLS[self.config.name]) locals: archive_path = <not found> dl_manager = <local> <datasets.utils.download_manager.DownloadManager object at 0x7fc07b426160> dl_manager.download = <local> <bound method DownloadManager.download of <datasets.utils.download_manager.DownloadManager object at 0x7fc07b426160>> _DL_URLS = <global> {'clean': {'dev': 'http://www.openslr.org/resources/12/dev-clean.tar.gz', 'test': 'http://www.openslr.org/resources/12/test-clean.tar.gz', 'train.100': 'http://www.openslr.org/resources/12/train-clean-100.tar.gz', 'train.360': 'http://www.openslr.org/resources/12/train-clean-360.tar.gz'}, 'other'... self = <local> <datasets_modules.datasets.librispeech_asr.1f4602f6b5fed8d3ab3e3382783173f2e12d9877e98775e34d7780881175096c.librispeech_asr.LibrispeechASR object at 0x7fc12a633310> self.config = <local> BuilderConfig(name='default', version=0.0.0, data_dir='/home/az/i6/setups/2022-03-20--sis/work/i6_core/datasets/huggingface/DownloadAndPrepareHuggingFaceDatasetJob.TV6Nwm6dFReF/output/data_dir', data_files=None, description=None) self.config.name = <local> 'default', len = 7 KeyError: 'default' ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.1.0 - Platform: Linux-5.4.0-107-generic-x86_64-with-glibc2.31 - Python version: 3.9.9 - PyArrow version: 6.0.1 - Pandas version: 1.4.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4179/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4179/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/4178
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4178/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4178/comments
https://api.github.com/repos/huggingface/datasets/issues/4178/events
https://github.com/huggingface/datasets/pull/4178
1,207,787,073
PR_kwDODunzps42ZfFN
4,178
[feat] Add ImageNet dataset
{ "login": "apsdehal", "id": 3616806, "node_id": "MDQ6VXNlcjM2MTY4MDY=", "avatar_url": "https://avatars.githubusercontent.com/u/3616806?v=4", "gravatar_id": "", "url": "https://api.github.com/users/apsdehal", "html_url": "https://github.com/apsdehal", "followers_url": "https://api.github.com/users/apsdehal/followers", "following_url": "https://api.github.com/users/apsdehal/following{/other_user}", "gists_url": "https://api.github.com/users/apsdehal/gists{/gist_id}", "starred_url": "https://api.github.com/users/apsdehal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/apsdehal/subscriptions", "organizations_url": "https://api.github.com/users/apsdehal/orgs", "repos_url": "https://api.github.com/users/apsdehal/repos", "events_url": "https://api.github.com/users/apsdehal/events{/privacy}", "received_events_url": "https://api.github.com/users/apsdehal/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4178). All of your documentation changes will be reflected on that endpoint.", "Thanks for the comments. I believe I have addressed all of them and also decreased the size of the dummy data file, so it should be ready for a re-review. I also made a change to allow adding synset mapping and valprep script in config in case we add ImageNet 21k some time later. ", "@lhoestq I have updated the PR to address all of the review comments." ]
1,650,348,095,000
1,650,574,034,000
null
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4178", "html_url": "https://github.com/huggingface/datasets/pull/4178", "diff_url": "https://github.com/huggingface/datasets/pull/4178.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4178.patch", "merged_at": null }
To use the dataset download the tar file [imagenet_object_localization_patched2019.tar.gz](https://www.kaggle.com/competitions/imagenet-object-localization-challenge/data?select=imagenet_object_localization_patched2019.tar.gz) from Kaggle and then point the datasets library to it by using: ```py from datasets import load_dataset dataset = load_dataset("imagenet", data_dir="/path/to/imagenet_object_localization_patched2019.tar.gz") ``` Currently train and validation splits are supported.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4178/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4178/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/4177
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4177/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4177/comments
https://api.github.com/repos/huggingface/datasets/issues/4177/events
https://github.com/huggingface/datasets/pull/4177
1,207,535,920
PR_kwDODunzps42Yxca
4,177
Adding missing subsets to the `SemEval-2018 Task 1` dataset
{ "login": "micahcarroll", "id": 11460267, "node_id": "MDQ6VXNlcjExNDYwMjY3", "avatar_url": "https://avatars.githubusercontent.com/u/11460267?v=4", "gravatar_id": "", "url": "https://api.github.com/users/micahcarroll", "html_url": "https://github.com/micahcarroll", "followers_url": "https://api.github.com/users/micahcarroll/followers", "following_url": "https://api.github.com/users/micahcarroll/following{/other_user}", "gists_url": "https://api.github.com/users/micahcarroll/gists{/gist_id}", "starred_url": "https://api.github.com/users/micahcarroll/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/micahcarroll/subscriptions", "organizations_url": "https://api.github.com/users/micahcarroll/orgs", "repos_url": "https://api.github.com/users/micahcarroll/repos", "events_url": "https://api.github.com/users/micahcarroll/events{/privacy}", "received_events_url": "https://api.github.com/users/micahcarroll/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,650,322,770,000
1,650,323,169,000
null
NONE
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4177", "html_url": "https://github.com/huggingface/datasets/pull/4177", "diff_url": "https://github.com/huggingface/datasets/pull/4177.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4177.patch", "merged_at": null }
This dataset for the [1st task of SemEval-2018](https://competitions.codalab.org/competitions/17751) competition was missing all subtasks except for subtask 5. I added another subtask (subtask 2), which is comprised of 12 additional data subsets: for each language in En, Es, Ar, there are 4 datasets, broken down by emotions (anger, fear, joy, sadness). ## Remaining questions I wasn't able to find any documentation about how one should make PRs to modify datasets. Because of that, I just did my best to integrate the new data into the code, and tested locally that this worked. I'm sorry if I'm not respecting your contributing guidelines – if they are documented somewhere, I'd appreciate if you could send a pointer! Not sure how `dataset_infos.json` and `dummy` should be updated. My understanding is that they were automatically generated at the time of the original dataset creation?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4177/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4177/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/4176
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4176/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4176/comments
https://api.github.com/repos/huggingface/datasets/issues/4176/events
https://github.com/huggingface/datasets/issues/4176
1,206,515,563
I_kwDODunzps5H6fdr
4,176
Very slow between two operations
{ "login": "yananchen1989", "id": 26405281, "node_id": "MDQ6VXNlcjI2NDA1Mjgx", "avatar_url": "https://avatars.githubusercontent.com/u/26405281?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yananchen1989", "html_url": "https://github.com/yananchen1989", "followers_url": "https://api.github.com/users/yananchen1989/followers", "following_url": "https://api.github.com/users/yananchen1989/following{/other_user}", "gists_url": "https://api.github.com/users/yananchen1989/gists{/gist_id}", "starred_url": "https://api.github.com/users/yananchen1989/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yananchen1989/subscriptions", "organizations_url": "https://api.github.com/users/yananchen1989/orgs", "repos_url": "https://api.github.com/users/yananchen1989/repos", "events_url": "https://api.github.com/users/yananchen1989/events{/privacy}", "received_events_url": "https://api.github.com/users/yananchen1989/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[]
1,650,239,549,000
1,650,240,180,000
1,650,240,180,000
NONE
null
null
null
Hello, in the processing stage I use two operations. The first one, map + filter, is very fast and uses all cores, while the second step is very slow and does not use all cores. Also, there is a significant lag between them. Am I missing something? ``` raw_datasets = raw_datasets.map(split_func, batched=False, num_proc=args.preprocessing_num_workers, load_from_cache_file=not args.overwrite_cache, desc = "running split para ==>")\ .filter(lambda example: example['text1']!='' and example['text2']!='', num_proc=args.preprocessing_num_workers, desc="filtering ==>") processed_datasets = raw_datasets.map( preprocess_function, batched=True, num_proc=args.preprocessing_num_workers, remove_columns=column_names, load_from_cache_file=not args.overwrite_cache, desc="Running tokenizer on dataset===>", ) ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4176/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4176/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/4175
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4175/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4175/comments
https://api.github.com/repos/huggingface/datasets/issues/4175/events
https://github.com/huggingface/datasets/pull/4175
1,205,589,842
PR_kwDODunzps42SqF-
4,175
Add WIT Dataset
{ "login": "thomasw21", "id": 24695242, "node_id": "MDQ6VXNlcjI0Njk1MjQy", "avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomasw21", "html_url": "https://github.com/thomasw21", "followers_url": "https://api.github.com/users/thomasw21/followers", "following_url": "https://api.github.com/users/thomasw21/following{/other_user}", "gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions", "organizations_url": "https://api.github.com/users/thomasw21/orgs", "repos_url": "https://api.github.com/users/thomasw21/repos", "events_url": "https://api.github.com/users/thomasw21/events{/privacy}", "received_events_url": "https://api.github.com/users/thomasw21/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4175). All of your documentation changes will be reflected on that endpoint.", "Hi! Coming in late with some context.\r\n\r\nThere are two versions of the WIT dataset:\r\n1. The original source dataset managed by Wikimedia. It has more information, raw image representations, and each row corresponds to an image linked to all of its captions wherever it happens in Wikipedia (in multiple languages)\r\n2. The Google version, corresponding to the data script in this PR, which duplicates image instances and requires the user to download the images themselves from the provided URL (note that a basic implementation will have them download the same picture several time. @thomasw21 using our download manager instead of `urllib` could help with that, but it wouldn't be required if people had access to the first version)\r\n\r\nThe Wikimedia folks were really interested in us hosting a ready-to-go streaming version of this dataset where users don't have to download the version themselves, which is why we have the pre-processed versions on an HF bucket, with the raw images and a pre-computed embedding (don't remember the model, we can keep it ). That's the data script currently in https://github.com/huggingface/datasets/pull/2981 . It's nearly ready to go, the one thing we should do is move the raw data from our HF google Cloud bucket to the Hub.\r\n\r\nHow do you want to move forward? IMO the best way would be to have a WIT dataset under the Wikimedia org with both configurations, but it depends on everyone's timelines", "Okay after offline discussion. We'll improve this versions and push it to the hub under `google` namespace. \r\n\r\n> which duplicates image instances and requires the user to download the images themselves from the provided URL (note that a basic implementation will have them download the same picture several time. @thomasw21 using our download manager instead of urllib could help with that, but it wouldn't be required if people had access to the first version)\r\n\r\nAh interesting wasn't aware of this duplication issue, concretely it'll just mean that our dataset in bigger than expected ... I think this should be handled after this loading script (though I have to figure our how to spawn a dl_manager).\r\n\r\n> The Wikimedia folks were really interested in us hosting a ready-to-go streaming version of this dataset where users don't have to download the version themselves, which is why we have the pre-processed versions on an HF bucket, with the raw images and a pre-computed embedding (don't remember the model, we can keep it ). That's the data script currently in https://github.com/huggingface/datasets/pull/2981 . It's nearly ready to go, the one thing we should do is move the raw data from our HF google Cloud bucket to the Hub.\r\n\r\nSimilarly a script will be written and pushed to `wikimedia` organisation." ]
1,650,030,152,000
1,650,632,937,000
null
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4175", "html_url": "https://github.com/huggingface/datasets/pull/4175", "diff_url": "https://github.com/huggingface/datasets/pull/4175.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4175.patch", "merged_at": null }
closes #2981 #2810 @nateraw @hassiahk I've listed you guys as co-authors as you've contributed to this dataset previously
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4175/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4175/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/4174
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4174/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4174/comments
https://api.github.com/repos/huggingface/datasets/issues/4174/events
https://github.com/huggingface/datasets/pull/4174
1,205,575,941
PR_kwDODunzps42SnJS
4,174
Fix when map function modifies input in-place
{ "login": "thomasw21", "id": 24695242, "node_id": "MDQ6VXNlcjI0Njk1MjQy", "avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomasw21", "html_url": "https://github.com/thomasw21", "followers_url": "https://api.github.com/users/thomasw21/followers", "following_url": "https://api.github.com/users/thomasw21/following{/other_user}", "gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions", "organizations_url": "https://api.github.com/users/thomasw21/orgs", "repos_url": "https://api.github.com/users/thomasw21/repos", "events_url": "https://api.github.com/users/thomasw21/events{/privacy}", "received_events_url": "https://api.github.com/users/thomasw21/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,650,028,995,000
1,650,034,327,000
1,650,033,958,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4174", "html_url": "https://github.com/huggingface/datasets/pull/4174", "diff_url": "https://github.com/huggingface/datasets/pull/4174.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4174.patch", "merged_at": 1650033958000 }
When `function` modifies its input in-place, the guarantee that columns in `remove_columns` are contained in `input` doesn't hold true anymore. Therefore we need to relax the way we pop elements by checking whether that column still exists. (A minimal sketch of this pattern follows this record.)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4174/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4174/timeline
null
true
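A minimal sketch of the tolerant column-popping pattern described in the record above. The helper names and the merge step are assumptions made for illustration; this is not the actual `datasets` implementation.

```python
def map_one_batch(batch: dict, function, remove_columns: list) -> dict:
    # Apply the user function first; it may mutate `batch` in place.
    out = function(batch)
    for column in remove_columns:
        # The column may already be gone if `function` removed it in place,
        # so only pop it when it is still present.
        if column in batch:
            batch.pop(column)
    # Merge what is left of the input with the function's output.
    return {**batch, **out}


def add_length(example):
    text = example.pop("text")  # the user function removes "text" in place
    return {"length": len(text)}


print(map_one_batch({"text": "hi", "id": 1}, add_length, remove_columns=["text"]))
# {'id': 1, 'length': 2}  (no KeyError even though "text" was already removed)
```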
https://api.github.com/repos/huggingface/datasets/issues/4173
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4173/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4173/comments
https://api.github.com/repos/huggingface/datasets/issues/4173/events
https://github.com/huggingface/datasets/pull/4173
1,204,657,114
PR_kwDODunzps42Ppnd
4,173
Stream private zipped images
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4173). All of your documentation changes will be reflected on that endpoint.", "oops looks like some tests are failing sorry, will fix them tomorrow\r\n\r\nEDIT: not today but asap hopefully" ]
1,649,949,307,000
1,650,382,181,000
null
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4173", "html_url": "https://github.com/huggingface/datasets/pull/4173", "diff_url": "https://github.com/huggingface/datasets/pull/4173.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4173.patch", "merged_at": null }
As mentioned in https://github.com/huggingface/datasets/issues/4139, it's currently not possible to stream private/gated zipped images from the Hub. This is because `Image.decode_example` does not handle authentication: decoding requires accessing and downloading the file from the private repository. In this PR I added authentication to `Image.decode_example` via a `token_per_repo_id` optional argument. I first wanted to just pass `use_auth_token`, but a single `Image` instance can be responsible for decoding images from a combination of several datasets (from `interleave_datasets` for example). Therefore I used a dictionary `repo_id` -> `token` instead. I'm getting the `repo_id` from the dataset builder (I replaced the `namespace` attribute with `repo_id`). I did the same for `Audio.decode_example`. (An illustrative sketch of the per-repo token lookup follows this record.) cc @SBrandeis @severo
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4173/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4173/timeline
null
true
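An illustrative sketch of the `repo_id` -> `token` lookup described in the record above. The helper name, signature, and example values are assumptions made for this example; in `datasets` the real logic lives inside the Image/Audio feature decoding code.

```python
from typing import Dict, Optional


def token_for_repo(repo_id: Optional[str], token_per_repo_id: Dict[str, str]) -> Optional[str]:
    # A single feature instance can decode files coming from several repositories
    # (e.g. after interleave_datasets), so the token is looked up per repository id.
    if repo_id is None:
        return None  # local or public file: no authentication needed
    return token_per_repo_id.get(repo_id)


# Example: only the private repository gets a token attached.
tokens = {"my-org/private-images": "hf_xxx"}  # hypothetical repo id and token
assert token_for_repo("my-org/private-images", tokens) == "hf_xxx"
assert token_for_repo("some-user/public-images", tokens) is None
```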
https://api.github.com/repos/huggingface/datasets/issues/4172
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4172/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4172/comments
https://api.github.com/repos/huggingface/datasets/issues/4172/events
https://github.com/huggingface/datasets/pull/4172
1,204,433,160
PR_kwDODunzps42O7LW
4,172
Update assin2 dataset_infos.json
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,649,937,186,000
1,650,034,062,000
1,650,033,682,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4172", "html_url": "https://github.com/huggingface/datasets/pull/4172", "diff_url": "https://github.com/huggingface/datasets/pull/4172.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4172.patch", "merged_at": 1650033682000 }
Following comments in https://github.com/huggingface/datasets/issues/4003 we found that it was outdated and causing an error when loading the dataset.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4172/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4172/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/4170
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4170/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4170/comments
https://api.github.com/repos/huggingface/datasets/issues/4170/events
https://github.com/huggingface/datasets/pull/4170
1,204,413,620
PR_kwDODunzps42O2-L
4,170
to_tf_dataset rewrite
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4170). All of your documentation changes will be reflected on that endpoint.", "[Magic is now banned](https://www.youtube.com/watch?v=WIn58XoY728#t=36s) by decree of @sgugger. This is honestly much cleaner, and the functionality will make much more sense in `transformers` anyway!", "@gante I renamed the default collator to `minimal_tf_collate_fn`!" ]
1,649,935,858,000
1,650,557,194,000
null
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4170", "html_url": "https://github.com/huggingface/datasets/pull/4170", "diff_url": "https://github.com/huggingface/datasets/pull/4170.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4170.patch", "merged_at": null }
This PR rewrites almost all of `to_tf_dataset()`, which makes it kind of hard to list all the changes, but the most critical ones are: - Much better stability and no more dropping unexpected column names (Sorry @NielsRogge) - Doesn't clobber custom transforms on the data (Sorry @NielsRogge again) - Much better handling of the situation when the `collate_fn` adds columns that aren't in the dataset. - Better inference of shapes and data types - Lots of hacky special-casing code removed - Can return string columns (as `tf.String`) - Most arguments have default values, so calling the method should be much simpler - ~~Can accept a `model` argument and only return columns that are valid inputs to that model~~ - Drops the `dummy_labels` argument - this was a workaround for Keras issues that have been resolved by changes in `transformers`. Also removed it from tests and the Overview notebook. I still have a couple of TODOs remaining and some testing to do, so don't merge yet, but it should be mostly ready for review at this point! (A hedged usage sketch follows this record.)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4170/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4170/timeline
null
true
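A hedged usage sketch of the rewritten `to_tf_dataset()` described in the record above. Argument names and defaults shown here are assumptions and may differ between library versions; this is not the PR's own example.

```python
from datasets import load_dataset
from transformers import AutoTokenizer, DefaultDataCollator

ds = load_dataset("rotten_tomatoes", split="train")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

# Tokenize up front so the TF dataset only has to batch ready-made tensors.
ds = ds.map(
    lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length", max_length=64),
    batched=True,
)

tf_ds = ds.to_tf_dataset(
    columns=["input_ids", "attention_mask"],
    label_cols=["label"],
    batch_size=16,
    shuffle=True,
    collate_fn=DefaultDataCollator(return_tensors="tf"),
)
```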
https://api.github.com/repos/huggingface/datasets/issues/4169
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4169/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4169/comments
https://api.github.com/repos/huggingface/datasets/issues/4169/events
https://github.com/huggingface/datasets/issues/4169
1,203,995,869
I_kwDODunzps5Hw4Td
4,169
Timit_asr dataset cannot be previewed recently
{ "login": "YingLi001", "id": 75192317, "node_id": "MDQ6VXNlcjc1MTkyMzE3", "avatar_url": "https://avatars.githubusercontent.com/u/75192317?v=4", "gravatar_id": "", "url": "https://api.github.com/users/YingLi001", "html_url": "https://github.com/YingLi001", "followers_url": "https://api.github.com/users/YingLi001/followers", "following_url": "https://api.github.com/users/YingLi001/following{/other_user}", "gists_url": "https://api.github.com/users/YingLi001/gists{/gist_id}", "starred_url": "https://api.github.com/users/YingLi001/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/YingLi001/subscriptions", "organizations_url": "https://api.github.com/users/YingLi001/orgs", "repos_url": "https://api.github.com/users/YingLi001/repos", "events_url": "https://api.github.com/users/YingLi001/events{/privacy}", "received_events_url": "https://api.github.com/users/YingLi001/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Thanks for reporting. The bug has already been detected, and we hope to fix it soon.", "TIMIT is now a dataset that requires manual download, see #4145 \r\n\r\nTherefore it might take a bit more time to fix it", "> TIMIT is now a dataset that requires manual download, see #4145\r\n> \r\n> Therefore it might take a bit more time to fix it\r\n\r\nThank you for your quickly response. Exactly, I also found the manual download issue in the morning. But when I used *list_datasets()* to check the available datasets, *'timit_asr'* is still in the list. So I am a little bit confused. If *'timit_asr'* need to be manually downloaded, does that mean we can **not** automatically download it **any more** in the future?", "Yes exactly. If you try to load the dataset it will ask you to download it manually first, and to pass the downloaded and extracted data like `load_dataset(\"timir_asr\", data_dir=\"path/to/extracted/data\")`\r\n\r\nThe URL we were using was coming from a host that doesn't have the permission to redistribute the data, and the dataset owners (LDC) notified us about it." ]
1,649,906,911,000
1,649,941,702,000
null
NONE
null
null
null
## Dataset viewer issue for '*timit_asr*' **Link:** *https://huggingface.co/datasets/timit_asr* Issue: The timit_asr dataset preview has recently stopped working. Am I the one who added this dataset? No
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4169/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4169/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/4168
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4168/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4168/comments
https://api.github.com/repos/huggingface/datasets/issues/4168/events
https://github.com/huggingface/datasets/pull/4168
1,203,867,540
PR_kwDODunzps42NL6F
4,168
Add code examples to API docs
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4168). All of your documentation changes will be reflected on that endpoint.", "> Do you think it is clearer to make every code example fully reproducible so when users copy the code they can actually run it and get an output? This seems quite repetitive - maybe even unnecessary - but it is definitely clearer.\r\n\r\nI think it's ok to be repetitive to get more clarity. Many users come from `transformers` and may have little experience with some processing methods (especially torch users).\r\n\r\n> Should we showcase a function with more than one parameter to highlight different use-cases (it's pretty basic right now, but I'd be happy to add more)?\r\n\r\nMaybe let's do it case by case, depending on whether there are parameters that are likely to be used often ?\r\n\r\n> For the class_encode_column function, let me know if there is a simpler dataset with fewer columns (currently using winograd_wsc) so it is easier for users to see what changed.\r\n\r\nYou can try with `boolq`, it has a boolean column that can be converted to labels\r\n\r\n> Where possible, I try to show the input before and the output after using a function like flatten for example. Do you think this is too much and just showing the usage (ie, >>> ds.flatten()) will be sufficient?\r\n\r\nNo I don't think it's too much, it's nice this way thanks :)", "Updated each code example so they are fully reproducible (where applicable)! The next step will be to identify some functions where we can show off some parameters that are useful or commonly used. Some useful parameters can be:\r\n\r\n- use `map(batched=True)` to process batches of examples.\r\n- set a seed in `shuffle`.\r\n- set `shuffle` and `seed` in `train_test_split`.\r\n\r\nLet me know if you think of anything else related to the functions in `arrow_dataset.py`!", "Cool thanks ! I think you can also do `num_proc` for `map`" ]
1,649,891,018,000
1,650,488,923,000
null
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4168", "html_url": "https://github.com/huggingface/datasets/pull/4168", "diff_url": "https://github.com/huggingface/datasets/pull/4168.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4168.patch", "merged_at": null }
This PR adds code examples for functions related to the base Datasets class to highlight usage. Most of the examples use the `rotten_tomatoes` dataset since it is nice and small. Several things I would appreciate feedback on: - Do you think it is clearer to make every code example fully reproducible so when users copy the code they can actually run it and get an output? This seems quite repetitive - maybe even unnecessary - but it is definitely clearer. Personally, I think we might be able to get away with not including this since users probably want to try the function on their own dataset. For example: ```py >>> from datasets import load_dataset >>> ds = load_dataset("rotten_tomatoes", split="validation") >>> code example goes here ``` - Should we showcase a function with more than one parameter to highlight different use-cases (it's pretty basic right now, but I'd be happy to add more)? - For the `class_encode_column` function, let me know if there is a simpler dataset with fewer columns (currently using `winograd_wsc`) so it is easier for users to see what changed. - Where possible, I try to show the input before and the output after using a function like `flatten` for example. Do you think this is too much and just showing the usage (ie, `>>> ds.flatten()`) will be sufficient? Thanks :)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4168/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4168/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/4167
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4167/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4167/comments
https://api.github.com/repos/huggingface/datasets/issues/4167/events
https://github.com/huggingface/datasets/pull/4167
1,203,761,614
PR_kwDODunzps42M1O5
4,167
Avoid rate limit in update hub repositories
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I also set GIT_LFS_SKIP_SMUDGE=1 to speed up git clones", "_The documentation is not available anymore as the PR was closed or merged._" ]
1,649,881,937,000
1,649,883,401,000
1,649,883,032,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4167", "html_url": "https://github.com/huggingface/datasets/pull/4167", "diff_url": "https://github.com/huggingface/datasets/pull/4167.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4167.patch", "merged_at": 1649883032000 }
use http.extraHeader to avoid rate limit
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4167/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4167/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/4166
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4166/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4166/comments
https://api.github.com/repos/huggingface/datasets/issues/4166/events
https://github.com/huggingface/datasets/pull/4166
1,203,758,004
PR_kwDODunzps42M0dS
4,166
Fix exact match
{ "login": "emibaylor", "id": 27527747, "node_id": "MDQ6VXNlcjI3NTI3NzQ3", "avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4", "gravatar_id": "", "url": "https://api.github.com/users/emibaylor", "html_url": "https://github.com/emibaylor", "followers_url": "https://api.github.com/users/emibaylor/followers", "following_url": "https://api.github.com/users/emibaylor/following{/other_user}", "gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}", "starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions", "organizations_url": "https://api.github.com/users/emibaylor/orgs", "repos_url": "https://api.github.com/users/emibaylor/repos", "events_url": "https://api.github.com/users/emibaylor/events{/privacy}", "received_events_url": "https://api.github.com/users/emibaylor/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4166). All of your documentation changes will be reflected on that endpoint." ]
1,649,881,686,000
1,649,882,309,000
null
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4166", "html_url": "https://github.com/huggingface/datasets/pull/4166", "diff_url": "https://github.com/huggingface/datasets/pull/4166.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4166.patch", "merged_at": null }
Clarify docs and add clarifying example to the exact_match metric
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4166/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4166/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/4165
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4165/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4165/comments
https://api.github.com/repos/huggingface/datasets/issues/4165/events
https://github.com/huggingface/datasets/pull/4165
1,203,730,187
PR_kwDODunzps42MubF
4,165
Fix google bleu typos, examples
{ "login": "emibaylor", "id": 27527747, "node_id": "MDQ6VXNlcjI3NTI3NzQ3", "avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4", "gravatar_id": "", "url": "https://api.github.com/users/emibaylor", "html_url": "https://github.com/emibaylor", "followers_url": "https://api.github.com/users/emibaylor/followers", "following_url": "https://api.github.com/users/emibaylor/following{/other_user}", "gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}", "starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions", "organizations_url": "https://api.github.com/users/emibaylor/orgs", "repos_url": "https://api.github.com/users/emibaylor/repos", "events_url": "https://api.github.com/users/emibaylor/events{/privacy}", "received_events_url": "https://api.github.com/users/emibaylor/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4165). All of your documentation changes will be reflected on that endpoint." ]
1,649,879,994,000
1,649,880,628,000
null
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4165", "html_url": "https://github.com/huggingface/datasets/pull/4165", "diff_url": "https://github.com/huggingface/datasets/pull/4165.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4165.patch", "merged_at": null }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4165/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4165/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/4164
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4164/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4164/comments
https://api.github.com/repos/huggingface/datasets/issues/4164/events
https://github.com/huggingface/datasets/pull/4164
1,203,661,346
PR_kwDODunzps42MfxX
4,164
Fix duplicate key in multi_news
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,649,875,704,000
1,649,883,856,000
1,649,883,482,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4164", "html_url": "https://github.com/huggingface/datasets/pull/4164", "diff_url": "https://github.com/huggingface/datasets/pull/4164.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4164.patch", "merged_at": 1649883482000 }
To merge after this job succeeded: https://github.com/huggingface/datasets/runs/6012207928
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4164/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4164/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/4163
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4163/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4163/comments
https://api.github.com/repos/huggingface/datasets/issues/4163/events
https://github.com/huggingface/datasets/issues/4163
1,203,539,268
I_kwDODunzps5HvI1E
4,163
Optional Content Warning for Datasets
{ "login": "TristanThrush", "id": 20826878, "node_id": "MDQ6VXNlcjIwODI2ODc4", "avatar_url": "https://avatars.githubusercontent.com/u/20826878?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TristanThrush", "html_url": "https://github.com/TristanThrush", "followers_url": "https://api.github.com/users/TristanThrush/followers", "following_url": "https://api.github.com/users/TristanThrush/following{/other_user}", "gists_url": "https://api.github.com/users/TristanThrush/gists{/gist_id}", "starred_url": "https://api.github.com/users/TristanThrush/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TristanThrush/subscriptions", "organizations_url": "https://api.github.com/users/TristanThrush/orgs", "repos_url": "https://api.github.com/users/TristanThrush/repos", "events_url": "https://api.github.com/users/TristanThrush/events{/privacy}", "received_events_url": "https://api.github.com/users/TristanThrush/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
1,649,867,881,000
1,649,867,881,000
null
MEMBER
null
null
null
**Is your feature request related to a problem? Please describe.** We now have hate speech datasets on the Hub, like this one: https://huggingface.co/datasets/HannahRoseKirk/HatemojiBuild I'm wondering if there is an option to select a content warning message that appears before the dataset preview. Otherwise, people immediately see hate speech when clicking on this dataset. **Describe the solution you'd like** Implementation of a content warning message that separates users from the dataset preview until they click out of the warning. **Describe alternatives you've considered** Possibly just a way to remove the dataset preview completely? I think I like the content warning option better, though.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4163/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4163/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/4162
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4162/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4162/comments
https://api.github.com/repos/huggingface/datasets/issues/4162/events
https://github.com/huggingface/datasets/pull/4162
1,203,421,909
PR_kwDODunzps42LtGO
4,162
Add Conceptual 12M
{ "login": "thomasw21", "id": 24695242, "node_id": "MDQ6VXNlcjI0Njk1MjQy", "avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomasw21", "html_url": "https://github.com/thomasw21", "followers_url": "https://api.github.com/users/thomasw21/followers", "following_url": "https://api.github.com/users/thomasw21/following{/other_user}", "gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions", "organizations_url": "https://api.github.com/users/thomasw21/orgs", "repos_url": "https://api.github.com/users/thomasw21/repos", "events_url": "https://api.github.com/users/thomasw21/events{/privacy}", "received_events_url": "https://api.github.com/users/thomasw21/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Looks like your dummy_data.zip file is not in the right location ;)\r\ndatasets/datasets/conceptual_12m/dummy/default/0.0.0/dummy_data.zip\r\n->\r\ndatasets/conceptual_12m/dummy/default/0.0.0/dummy_data.zip" ]
1,649,861,843,000
1,650,010,381,000
1,650,009,985,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4162", "html_url": "https://github.com/huggingface/datasets/pull/4162", "diff_url": "https://github.com/huggingface/datasets/pull/4162.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4162.patch", "merged_at": 1650009985000 }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4162/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4162/timeline
null
true