url (string) | repository_url (string) | labels_url (string) | comments_url (string) | events_url (string) | html_url (string) | id (int64) | node_id (string) | number (int64) | title (string) | labels (list) | state (string) | locked (bool) | milestone (dict) | comments (int64) | created_at (string) | updated_at (string) | closed_at (string) | active_lock_reason (null) | body (string) | reactions (dict) | timeline_url (string) | performed_via_github_app (null) | state_reason (string) | draft (bool) | pull_request (dict) | is_pull_request (bool) | comments_text (sequence) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/1539 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1539/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1539/comments | https://api.github.com/repos/huggingface/datasets/issues/1539/events | https://github.com/huggingface/datasets/pull/1539 | 765,338,910 | MDExOlB1bGxSZXF1ZXN0NTM4OTQyMTU4 | 1,539 | Added Wiki Asp dataset | [] | closed | false | null | 3 | 2020-12-13T12:18:34Z | 2020-12-22T10:16:01Z | 2020-12-22T10:16:01Z | null | Hello,
I have added Wiki Asp dataset. Please review the PR. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1539/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1539/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1539.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1539",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1539.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1539"
} | true | [
"> Awesome thank you !\r\n> \r\n> I just left one comment.\r\n> \r\n> Also it looks like the dummy_data.zip files are quite big (around 500KB each)\r\n> Can you try to reduce their sizes please ? Ideally they should be <20KB each\r\n> \r\n> To do so feel free to take a look inside them and in the jsonl files only keep 1 or 2 samples instead of 5 and also remove big chunks of text to only keep a few passages.\r\n\r\nThanks, I have updated the dummy data to keep each domain <20/30KB.",
"> > Awesome thank you !\r\n> > I just left one comment.\r\n> > Also it looks like the dummy_data.zip files are quite big (around 500KB each)\r\n> > Can you try to reduce their sizes please ? Ideally they should be <20KB each\r\n> > To do so feel free to take a look inside them and in the jsonl files only keep 1 or 2 samples instead of 5 and also remove big chunks of text to only keep a few passages.\r\n> \r\n> Thanks, I have updated the dummy data to keep each domain <20/30KB.\r\n\r\nLooks like this branch has other commits. I will open a new PR with suggested changes.",
"opened a new PR #1612 "
] |
https://api.github.com/repos/huggingface/datasets/issues/2654 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2654/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2654/comments | https://api.github.com/repos/huggingface/datasets/issues/2654/events | https://github.com/huggingface/datasets/issues/2654 | 945,167,231 | MDU6SXNzdWU5NDUxNjcyMzE= | 2,654 | Give a user feedback if the dataset he loads is streamable or not | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | 2 | 2021-07-15T09:07:27Z | 2021-08-02T11:03:21Z | null | null | **Is your feature request related to a problem? Please describe.**
I would love to know whether a `dataset` is streamable or not with the current implementation.
**Describe the solution you'd like**
We could show a warning when a dataset is loaded with `load_dataset('...',streaming=True)` but is not streamable, e.g. if it is an archive.
**Describe alternatives you've considered**
Add a new metadata tag for "streaming"
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2654/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2654/timeline | null | null | null | null | false | [
"#self-assign",
"I understand it already raises a `NotImplementedError` exception, eg:\r\n\r\n```\r\n>>> dataset = load_dataset(\"journalists_questions\", name=\"plain_text\", split=\"train\", streaming=True)\r\n\r\n[...]\r\nNotImplementedError: Extraction protocol for file at https://drive.google.com/uc?export=download&id=1CBrh-9OrSpKmPQBxTK_ji6mq6WTN_U9U is not implemented yet\r\n```\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/5875 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5875/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5875/comments | https://api.github.com/repos/huggingface/datasets/issues/5875/events | https://github.com/huggingface/datasets/issues/5875 | 1,716,770,394 | I_kwDODunzps5mU9Za | 5,875 | Why split slicing doesn't behave like list slicing ? | [
{
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists",
"id": 1935892865,
"name": "duplicate",
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate"
}
] | open | false | null | 1 | 2023-05-19T07:21:10Z | 2023-05-23T16:02:14Z | null | null | ### Describe the bug
If I want to get the first 10 samples of my dataset, I can do :
```
ds = datasets.load_dataset('mnist', split='train[:10]')
```
But if I exceed the number of samples in the dataset, an exception is raised :
```
ds = datasets.load_dataset('mnist', split='train[:999999999]')
```
> ValueError: Requested slice [:999999999] incompatible with 60000 examples.
### Steps to reproduce the bug
```
ds = datasets.load_dataset('mnist', split='train[:999999999]')
```
### Expected behavior
I would expect it to behave like python lists (no exception raised, the whole list is kept) :
```
d = list(range(1000))[:999999]
print(len(d)) # > 1000
```
### Environment info
- `datasets` version: 2.9.0
- Platform: macOS-12.6-arm64-arm-64bit
- Python version: 3.9.12
- PyArrow version: 11.0.0
- Pandas version: 1.5.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5875/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5875/timeline | null | null | null | null | false | [
"A duplicate of https://github.com/huggingface/datasets/issues/1774"
] |
https://api.github.com/repos/huggingface/datasets/issues/1005 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1005/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1005/comments | https://api.github.com/repos/huggingface/datasets/issues/1005/events | https://github.com/huggingface/datasets/pull/1005 | 755,337,255 | MDExOlB1bGxSZXF1ZXN0NTMxMDY3Mjc5 | 1,005 | Adding Autshumato South african langages: | [] | closed | false | null | 0 | 2020-12-02T14:47:33Z | 2020-12-03T13:13:30Z | 2020-12-03T13:13:30Z | null | https://repo.sadilar.org/handle/20.500.12185/7/discover?filtertype=database&filter_relational_operator=equals&filter=Multilingual+Text+Corpora%3A+Aligned | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1005/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1005/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1005.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1005",
"merged_at": "2020-12-03T13:13:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1005.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1005"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5063 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5063/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5063/comments | https://api.github.com/repos/huggingface/datasets/issues/5063/events | https://github.com/huggingface/datasets/pull/5063 | 1,395,895,463 | PR_kwDODunzps5AHasG | 5,063 | Align signature of list_repo_files with latest hfh | [] | closed | false | null | 1 | 2022-10-04T08:51:46Z | 2022-10-07T16:42:57Z | 2022-10-07T16:40:16Z | null | This PR aligns the signature of `list_repo_files` with the current one in `hfh`, by renaming deprecated `token` to `use_auth_token`.
This is already the case for `dataset_info`.
CC: @lhoestq | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5063/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5063/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5063.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5063",
"merged_at": "2022-10-07T16:40:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5063.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5063"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/4455 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4455/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4455/comments | https://api.github.com/repos/huggingface/datasets/issues/4455/events | https://github.com/huggingface/datasets/pull/4455 | 1,263,089,067 | PR_kwDODunzps45O5F9 | 4,455 | Update data URLs in fever dataset | [] | closed | false | null | 1 | 2022-06-07T10:40:54Z | 2022-06-08T07:24:54Z | 2022-06-08T07:16:17Z | null | As stated in their website, data owners updated their URLs on 28/04/2022.
This PR updates the data URLs.
Fix #4452. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4455/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4455/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4455.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4455",
"merged_at": "2022-06-08T07:16:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4455.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4455"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/4815 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4815/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4815/comments | https://api.github.com/repos/huggingface/datasets/issues/4815/events | https://github.com/huggingface/datasets/issues/4815 | 1,334,078,303 | I_kwDODunzps5PhGtf | 4,815 | Outdated loading script for OPUS ParaCrawl dataset | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | 0 | 2022-08-10T05:12:34Z | 2022-08-12T14:17:57Z | 2022-08-12T14:17:57Z | null | ## Describe the bug
Our loading script for OPUS ParaCrawl loads version 7.1 of the dataset. The current version is 9.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4815/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4815/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/2349 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2349/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2349/comments | https://api.github.com/repos/huggingface/datasets/issues/2349/events | https://github.com/huggingface/datasets/pull/2349 | 888,586,018 | MDExOlB1bGxSZXF1ZXN0NjQxNzYzNzg3 | 2,349 | Update task_ids for Ascent KB | [] | closed | false | null | 0 | 2021-05-11T20:44:33Z | 2021-05-17T10:53:14Z | 2021-05-17T10:48:34Z | null | This "other-other-knowledge-base" task is better suited for the dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2349/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2349/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2349.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2349",
"merged_at": "2021-05-17T10:48:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2349.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2349"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1082 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1082/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1082/comments | https://api.github.com/repos/huggingface/datasets/issues/1082/events | https://github.com/huggingface/datasets/pull/1082 | 756,676,218 | MDExOlB1bGxSZXF1ZXN0NTMyMTg3ODg3 | 1,082 | Myanmar news dataset | [] | closed | false | null | 1 | 2020-12-03T23:39:00Z | 2020-12-04T10:13:38Z | 2020-12-04T10:13:38Z | null | Add news topic classification dataset in Myanmar / Burmese languagess
This data was collected in 2017 by Aye Hninn Khine, and published on GitHub with a GPL license.
https://github.com/ayehninnkhine/MyanmarNewsClassificationSystem
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1082/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1082/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1082.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1082",
"merged_at": "2020-12-04T10:13:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1082.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1082"
} | true | [
"merging since the CI is fixed on master"
] |
https://api.github.com/repos/huggingface/datasets/issues/1464 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1464/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1464/comments | https://api.github.com/repos/huggingface/datasets/issues/1464/events | https://github.com/huggingface/datasets/pull/1464 | 761,533,566 | MDExOlB1bGxSZXF1ZXN0NTM2MTg3MDA0 | 1,464 | Reddit jokes | [] | closed | false | null | 2 | 2020-12-10T19:15:19Z | 2020-12-10T20:14:00Z | 2020-12-10T20:14:00Z | null | 196k Reddit Jokes dataset
Dataset link- https://raw.githubusercontent.com/taivop/joke-dataset/master/reddit_jokes.json | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1464/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1464/timeline | null | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/1464.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1464",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1464.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1464"
} | true | [
"@lhoestq would you please rerun the test, ",
"I re-started the test.\r\n\r\n@lhoestq let's hold off on merging for now though, having a conversation on Slack about some of the offensive content in the dataset and how/whether we want to present it."
] |
https://api.github.com/repos/huggingface/datasets/issues/6048 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6048/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6048/comments | https://api.github.com/repos/huggingface/datasets/issues/6048/events | https://github.com/huggingface/datasets/issues/6048 | 1,809,629,346 | I_kwDODunzps5r3MCi | 6,048 | when i use datasets.load_dataset, i encounter the http connect error! | [] | closed | false | null | 1 | 2023-07-18T10:16:34Z | 2023-07-18T16:18:39Z | 2023-07-18T16:18:39Z | null | ### Describe the bug
`common_voice_test = load_dataset("audiofolder", data_dir="./dataset/",cache_dir="./cache",split=datasets.Split.TEST)`
When I run the code above, I get the error below:
--------------------------------------------
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.3.2/datasets/audiofolder/audiofolder.py (ConnectionError(MaxRetryError("HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/2.3.2/datasets/audiofolder/audiofolder.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f299ed082e0>: Failed to establish a new connection: [Errno 101] Network is unreachable'))")))
--------------------------------------------------
All my data is on the local machine, so why does it need to connect to the internet? How can I fix this, given that my machine cannot connect to the internet?
### Steps to reproduce the bug
1
### Expected behavior
no error when i use the load_dataset func
### Environment info
python=3.8.15 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6048/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6048/timeline | null | completed | null | null | false | [
"The `audiofolder` loader is not available in version `2.3.2`, hence the error. Please run the `pip install -U datasets` command to update the `datasets` installation to make `load_dataset(\"audiofolder\", ...)` work."
] |
https://api.github.com/repos/huggingface/datasets/issues/3834 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3834/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3834/comments | https://api.github.com/repos/huggingface/datasets/issues/3834/events | https://github.com/huggingface/datasets/pull/3834 | 1,160,657,937 | PR_kwDODunzps40ATVw | 3,834 | Fix dead dataset scripts creation link. | [] | closed | false | null | 0 | 2022-03-06T16:45:48Z | 2022-03-07T12:12:07Z | 2022-03-07T12:12:07Z | null | Previous link gives 404 error. Updated with a new dataset scripts creation link. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3834/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3834/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3834.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3834",
"merged_at": "2022-03-07T12:12:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3834.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3834"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2652 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2652/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2652/comments | https://api.github.com/repos/huggingface/datasets/issues/2652/events | https://github.com/huggingface/datasets/pull/2652 | 944,865,924 | MDExOlB1bGxSZXF1ZXN0NjkwMjg0MTI4 | 2,652 | Fix logging docstring | [] | closed | false | null | 0 | 2021-07-14T23:19:58Z | 2021-07-18T11:41:06Z | 2021-07-15T09:57:31Z | null | Remove "no tqdm bars" from the docstring in the logging module to align it with the changes introduced in #2534. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2652/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2652/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2652.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2652",
"merged_at": "2021-07-15T09:57:31Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2652.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2652"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3410 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3410/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3410/comments | https://api.github.com/repos/huggingface/datasets/issues/3410/events | https://github.com/huggingface/datasets/pull/3410 | 1,075,815,415 | PR_kwDODunzps4voFG7 | 3,410 | Fix dependencies conflicts in Windows CI after conda update to 4.11 | [] | closed | false | null | 0 | 2021-12-09T17:19:11Z | 2021-12-09T17:36:20Z | 2021-12-09T17:36:19Z | null | For some reason the CI wasn't using python 3.6 but python 3.7 after the update to conda 4.11 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3410/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3410/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3410.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3410",
"merged_at": "2021-12-09T17:36:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3410.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3410"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2337 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2337/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2337/comments | https://api.github.com/repos/huggingface/datasets/issues/2337/events | https://github.com/huggingface/datasets/issues/2337 | 881,610,567 | MDU6SXNzdWU4ODE2MTA1Njc= | 2,337 | NonMatchingChecksumError for web_of_science dataset | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2021-05-09T02:02:02Z | 2021-05-10T13:35:53Z | 2021-05-10T13:35:53Z | null | NonMatchingChecksumError when trying to download the web_of_science dataset.
>NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://data.mendeley.com/datasets/9rw3vkcfy4/6/files/c9ea673d-5542-44c0-ab7b-f1311f7d61df/WebOfScience.zip?dl=1']
Setting `ignore_verifications=True` results in OSError.
>OSError: Cannot find data file.
Original error:
[Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/37ab2c42f50d553c1d0ea432baca3e9e11fedea4aeec63a81e6b7e25dd10d4e7/WOS5736/X.txt'
```python
dataset = load_dataset('web_of_science', 'WOS5736')
```
There are 3 data configurations and none of them works: 'WOS5736', 'WOS11967', 'WOS46985'
datasets 1.6.2
python 3.7.10
Ubuntu 18.04.5 LTS | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2337/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2337/timeline | null | completed | null | null | false | [
"I've raised a PR for this. Should work with `dataset = load_dataset(\"web_of_science\", \"WOS11967\", ignore_verifications=True)`once it gets merged into the main branch. Thanks for reporting this! "
] |
https://api.github.com/repos/huggingface/datasets/issues/5121 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5121/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5121/comments | https://api.github.com/repos/huggingface/datasets/issues/5121/events | https://github.com/huggingface/datasets/pull/5121 | 1,410,681,067 | PR_kwDODunzps5A4gUB | 5,121 | Bugfix ignore function when creating new_fingerprint for caching | [] | closed | false | null | 1 | 2022-10-17T00:03:43Z | 2022-10-17T12:39:36Z | 2022-10-17T12:39:36Z | null | maybe fixes: #5109 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5121/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5121/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5121.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5121",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5121.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5121"
} | true | [
"Adding \"function\" to the kwargs to ignore when computing the fingerprint will break `map` caching. Indeed passing two different function would result in two different datasets that have the same fingerprint - and the cache wouldn't be able to distinguish them.\r\n\r\nE.g this code would reload ds1 from the cache insetad of computing the dataset for ds2\r\n```python\r\nds = Dataset.from_dict({\"a\": [1, 2, 3]})\r\nds1 = ds.map(lambda x: {\"b\": 1})\r\nds2 = ds.map(lambda x: {\"b\": 2})\r\n```"
] |
https://api.github.com/repos/huggingface/datasets/issues/2923 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2923/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2923/comments | https://api.github.com/repos/huggingface/datasets/issues/2923/events | https://github.com/huggingface/datasets/issues/2923 | 997,351,590 | I_kwDODunzps47cmCm | 2,923 | Loading an autonlp dataset raises in normal mode but not in streaming mode | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | closed | false | null | 1 | 2021-09-15T17:44:38Z | 2022-04-12T10:09:40Z | 2022-04-12T10:09:39Z | null | ## Describe the bug
The same dataset (from autonlp) raises an error in normal mode, but does not raise in streaming mode
## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset("severo/autonlp-data-sentiment_detection-3c8bcd36", split="train", streaming=False)
## raises an error
load_dataset("severo/autonlp-data-sentiment_detection-3c8bcd36", split="train", streaming=True)
## does not raise an error
```
## Expected results
Both calls should raise the same error
## Actual results
Call with streaming=False:
```
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 5825.42it/s]
Using custom data configuration autonlp-data-sentiment_detection-3c8bcd36-fe30267462d1d42b
Downloading and preparing dataset json/autonlp-data-sentiment_detection-3c8bcd36 to /home/slesage/.cache/huggingface/datasets/json/autonlp-data-sentiment_detection-3c8bcd36-fe30267462d1d42b/0.0.0/d75ead8d5cfcbe67495df0f89bd262f0023257fbbbd94a730313295f3d756d50...
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:00<00:00, 15923.71it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:00<00:00, 3346.88it/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 636, in download_and_prepare
self._download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 726, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 1187, in _prepare_split
writer.write_table(table)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/arrow_writer.py", line 418, in write_table
pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/arrow_writer.py", line 418, in <listcomp>
pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema)
File "pyarrow/table.pxi", line 1249, in pyarrow.lib.Table.__getitem__
File "pyarrow/table.pxi", line 1825, in pyarrow.lib.Table.column
File "pyarrow/table.pxi", line 1800, in pyarrow.lib.Table._ensure_integer_index
KeyError: 'Field "splits" does not exist in table schema'
```
Call with `streaming=True`:
```
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 6000.43it/s]
Using custom data configuration autonlp-data-sentiment_detection-3c8bcd36-fe30267462d1d42b
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:00<00:00, 46916.15it/s]
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:00<00:00, 148734.18it/s]
```
## Environment info
- `datasets` version: 1.12.1.dev0
- Platform: Linux-5.11.0-1017-aws-x86_64-with-glibc2.29
- Python version: 3.8.11
- PyArrow version: 4.0.1
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2923/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2923/timeline | null | completed | null | null | false | [
"Closing since autonlp dataset are now supported"
] |
https://api.github.com/repos/huggingface/datasets/issues/1943 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1943/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1943/comments | https://api.github.com/repos/huggingface/datasets/issues/1943/events | https://github.com/huggingface/datasets/pull/1943 | 816,160,453 | MDExOlB1bGxSZXF1ZXN0NTc5ODY5NTk0 | 1,943 | Implement Dataset from JSON and JSON Lines | [] | closed | false | null | 11 | 2021-02-25T07:17:33Z | 2021-03-18T09:42:08Z | 2021-03-18T09:42:08Z | null | Implement `Dataset.from_jsonl`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1943/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1943/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1943.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1943",
"merged_at": "2021-03-18T09:42:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1943.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1943"
} | true | [
"Thanks @lhoestq. I was trying to follow @thomwolf suggestion about integrating that script but as `from_json` method...\r\n> Note that I don't think this is necessary a breaking change, we can still keep the old scripts around\r\n\r\nDo you think there is a better way of doing it?\r\n\r\nI was trying to implement more or less the same logic as in the script, but I confess I assumed the target was in-memory only...",
"Basically, I was trying to reimplement `Json(datasets.ArrowBasedBuilder)._generate_tables`, and no writing to arrow file (I assumed only in-memory usage). I started with the first \"else\" clause... \r\n\r\nI was planning to remove my `_cast_table_to_info_features` and use `paj.read_json(parse_options=...)` instead (like in the script).",
"@lhoestq I am wondering why `keep_in_memory` has no effect for JSON...",
"What's the issue exactly ? Apparently it's correctly passed to as_dataset so I don't find the issue",
"Nevermind @lhoestq, I found where the problem was in my code... I push!",
"<s>merging master into this branch should fix the CI issue :)</s>\r\n\r\nOops I didn't refresh the page sorry ^^'\r\n\r\nLooks all good !",
"Good job ! I think we can merge after the last changes regarding the error message and the docstring above :)",
"@lhoestq Done! And I have also added some tests for the `field` parameter.",
"Let me add some more tests for dict of lists JSON file, please.",
"@lhoestq done! ;)",
"We can merge. Additional work will be done in another PR. ;)"
] |
https://api.github.com/repos/huggingface/datasets/issues/4760 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4760/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4760/comments | https://api.github.com/repos/huggingface/datasets/issues/4760/events | https://github.com/huggingface/datasets/issues/4760 | 1,320,878,223 | I_kwDODunzps5OuwCP | 4,760 | Issue with offline mode | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | 6 | 2022-07-28T12:45:14Z | 2023-05-11T10:11:48Z | null | null | ## Describe the bug
I can't retrieve a cached dataset with offline mode enabled
## Steps to reproduce the bug
To reproduce my issue, first, you'll need to run a script that will cache the dataset
```python
import os
os.environ["HF_DATASETS_OFFLINE"] = "0"
import datasets
datasets.logging.set_verbosity_info()
ds_name = "SaulLu/toy_struc_dataset"
ds = datasets.load_dataset(ds_name)
print(ds)
```
then, you can try to reload it in offline mode:
```python
import os
os.environ["HF_DATASETS_OFFLINE"] = "1"
import datasets
datasets.logging.set_verbosity_info()
ds_name = "SaulLu/toy_struc_dataset"
ds = datasets.load_dataset(ds_name)
print(ds)
```
## Expected results
I would have expected the 2nd snippet not to return any errors
## Actual results
The 2nd snippet returns:
```
Traceback (most recent call last):
File "/home/lucile_huggingface_co/sandbox/evaluate/test_cache_datasets.py", line 8, in <module>
ds = datasets.load_dataset(ds_name)
File "/home/lucile_huggingface_co/anaconda3/envs/evaluate-dev/lib/python3.8/site-packages/datasets/load.py", line 1723, in load_dataset
builder_instance = load_dataset_builder(
File "/home/lucile_huggingface_co/anaconda3/envs/evaluate-dev/lib/python3.8/site-packages/datasets/load.py", line 1500, in load_dataset_builder
dataset_module = dataset_module_factory(
File "/home/lucile_huggingface_co/anaconda3/envs/evaluate-dev/lib/python3.8/site-packages/datasets/load.py", line 1241, in dataset_module_factory
raise ConnectionError(f"Couln't reach the Hugging Face Hub for dataset '{path}': {e1}") from None
ConnectionError: Couln't reach the Hugging Face Hub for dataset 'SaulLu/toy_struc_dataset': Offline mode is enabled.
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.4.0
- Platform: Linux-4.19.0-21-cloud-amd64-x86_64-with-glibc2.17
- Python version: 3.8.13
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
Maybe I'm misunderstanding something in the use of the offline mode (see [doc](https://huggingface.co/docs/datasets/v2.4.0/en/loading#offline)), is that the case?
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4760/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4760/timeline | null | null | null | null | false | [
"Hi @SaulLu, thanks for reporting.\r\n\r\nI think offline mode is not supported for datasets containing only data files (without any loading script). I'm having a look into this...",
"Thanks for your feedback! \r\n\r\nTo give you a little more info, if you don't set the offline mode flag, the script will load the cache. I first noticed this behavior with the `evaluate` library, and while trying to understand the downloading flow I realized that I had a similar error with datasets.",
"This is an issue we have to fix.",
"This is related to https://github.com/huggingface/datasets/issues/3547",
"Still not fixed? ......",
"#5331 will be helpful to fix this, as it updates the cache directory template to be aligned with the other datasets"
] |
https://api.github.com/repos/huggingface/datasets/issues/3068 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3068/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3068/comments | https://api.github.com/repos/huggingface/datasets/issues/3068/events | https://github.com/huggingface/datasets/pull/3068 | 1,024,681,264 | PR_kwDODunzps4tHhOC | 3,068 | feat: increase streaming retry config | [] | closed | false | null | 1 | 2021-10-13T02:00:50Z | 2021-10-13T09:25:56Z | 2021-10-13T09:25:54Z | null | Increase streaming config parameters:
* retry interval set to 5 seconds
* max retries set to 20 (i.e. up to 1 min 40 s)
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3068/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3068/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3068.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3068",
"merged_at": "2021-10-13T09:25:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3068.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3068"
} | true | [
"@lhoestq I had 2 runs for more than 2 days each, continuously streaming (they were failing before with 3 retries at 1 sec interval).\r\n\r\nThey are running on TPU's (so great internet connection) and only had connection errors a few times each (3 & 4). Each time it worked after only 1 retry.\r\nThe reason for a higher number of retries is for local connections. It would allow for almost 2mn of a wifi/ethernet disconnection. In practice this should not happen very often.\r\n\r\nLet me know if you think it's too much."
] |
https://api.github.com/repos/huggingface/datasets/issues/3298 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3298/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3298/comments | https://api.github.com/repos/huggingface/datasets/issues/3298/events | https://github.com/huggingface/datasets/issues/3298 | 1,058,420,201 | I_kwDODunzps4_FjXp | 3,298 | Agnews dataset viewer is not working | [
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | closed | false | null | 3 | 2021-11-19T11:18:59Z | 2021-12-21T16:24:05Z | 2021-12-21T16:24:05Z | null | ## Dataset viewer issue for '*name of the dataset*'
**Link:** https://huggingface.co/datasets/ag_news
Hi there, the `ag_news` dataset viewer is not working.
Am I the one who added this dataset? No
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3298/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3298/timeline | null | completed | null | null | false | [
"Hi ! Thanks for reporting\r\nWe've already fixed the code that generates the preview for this dataset, we'll release the fix soon :)",
"Hi @lhoestq, thanks for your feedback!",
"Fixed in the viewer.\r\n\r\nhttps://huggingface.co/datasets/ag_news"
] |
https://api.github.com/repos/huggingface/datasets/issues/2415 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2415/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2415/comments | https://api.github.com/repos/huggingface/datasets/issues/2415/events | https://github.com/huggingface/datasets/issues/2415 | 903,923,097 | MDU6SXNzdWU5MDM5MjMwOTc= | 2,415 | Cached dataset not loaded | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 5 | 2021-05-27T15:40:06Z | 2021-06-02T13:15:47Z | 2021-06-02T13:15:47Z | null | ## Describe the bug
I have a large dataset (common_voice, english) where I use several map and filter functions.
Sometimes my cached datasets after specific functions are not loaded.
I always use the same arguments, same functions, no seed…
## Steps to reproduce the bug
```python
def filter_by_duration(batch):
return (
batch["duration"] <= 10
and batch["duration"] >= 1
and len(batch["target_text"]) > 5
)
def prepare_dataset(batch):
batch["input_values"] = processor(
batch["speech"], sampling_rate=batch["sampling_rate"][0]
).input_values
with processor.as_target_processor():
batch["labels"] = processor(batch["target_text"]).input_ids
return batch
train_dataset = train_dataset.filter(
filter_by_duration,
remove_columns=["duration"],
num_proc=data_args.preprocessing_num_workers,
)
# PROBLEM HERE -> below function is reexecuted and cache is not loaded
train_dataset = train_dataset.map(
prepare_dataset,
remove_columns=train_dataset.column_names,
batch_size=training_args.per_device_train_batch_size,
batched=True,
num_proc=data_args.preprocessing_num_workers,
)
# Later in script
set_caching_enabled(False)
# apply map on trained model to eval/test sets
```
## Expected results
The cached dataset should always be reloaded.
## Actual results
The function is reexecuted.
I have access to cached files `cache-xxxxx.arrow`.
Is there a way I can manually load the two versions and see how the hash was created, for debugging purposes (to know whether it's an issue with the dataset or the function)?
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.6.2
- Platform: Linux-5.8.0-45-generic-x86_64-with-glibc2.29
- Python version: 3.8.5
- PyTorch version (GPU?): 1.8.1+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2415/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2415/timeline | null | completed | null | null | false | [
"It actually seems to happen all the time in above configuration:\r\n* the function `filter_by_duration` correctly loads cached processed dataset\r\n* the function `prepare_dataset` is always reexecuted\r\n\r\nI end up solving the issue by saving to disk my dataset at the end but I'm still wondering if it's a bug or limitation here.",
"Hi ! The hash used for caching `map` results is the fingerprint of the resulting dataset. It is computed using three things:\r\n- the old fingerprint of the dataset\r\n- the hash of the function\r\n- the hash of the other parameters passed to `map`\r\n\r\nYou can compute the hash of your function (or any python object) with\r\n```python\r\nfrom datasets.fingerprint import Hasher\r\n\r\nmy_func = lambda x: x + 1\r\nprint(Hasher.hash(my_func))\r\n```\r\n\r\nIf `prepare_dataset` is always executed, maybe this is because your `processor` has a different hash each time you want to execute it.",
"> If `prepare_dataset` is always executed, maybe this is because your `processor` has a different hash each time you want to execute it.\r\n\r\nYes I think that was the issue.\r\n\r\nFor the hash of the function:\r\n* does it consider just the name or the actual code of the function\r\n* does it consider variables that are not passed explicitly as parameters to the functions (such as the processor here)",
"> does it consider just the name or the actual code of the function\r\n\r\nIt looks at the name and the actual code and all variables such as recursively. It uses `dill` to do so, which is based on `pickle`.\r\nBasically the hash is computed using the pickle bytes of your function (computed using `dill` to support most python objects).\r\n\r\n> does it consider variables that are not passed explicitly as parameters to the functions (such as the processor here)\r\n\r\nYes it does thanks to recursive pickling.",
"Thanks for these explanations. I'm closing the issue."
] |
https://api.github.com/repos/huggingface/datasets/issues/5549 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5549/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5549/comments | https://api.github.com/repos/huggingface/datasets/issues/5549/events | https://github.com/huggingface/datasets/pull/5549 | 1,590,836,848 | PR_kwDODunzps5KSsi3 | 5,549 | Apply ruff flake8-comprehension checks | [] | closed | false | null | 2 | 2023-02-19T20:09:28Z | 2023-02-23T14:06:39Z | 2023-02-23T13:59:39Z | null | Fix #5548
Apply ruff's flake8-comprehension checks for better performance, and more readable code. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5549/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5549/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5549.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5549",
"merged_at": "2023-02-23T13:59:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5549.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5549"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009598 / 0.011353 (-0.001755) | 0.005115 / 0.011008 (-0.005893) | 0.100100 / 0.038508 (0.061592) | 0.036193 / 0.023109 (0.013083) | 0.296478 / 0.275898 (0.020580) | 0.355997 / 0.323480 (0.032517) | 0.007846 / 0.007986 (-0.000140) | 0.004082 / 0.004328 (-0.000247) | 0.076949 / 0.004250 (0.072699) | 0.044304 / 0.037052 (0.007252) | 0.310775 / 0.258489 (0.052286) | 0.333914 / 0.293841 (0.040073) | 0.037783 / 0.128546 (-0.090763) | 0.012023 / 0.075646 (-0.063623) | 0.333311 / 0.419271 (-0.085961) | 0.047568 / 0.043533 (0.004035) | 0.295567 / 0.255139 (0.040428) | 0.315707 / 0.283200 (0.032507) | 0.102675 / 0.141683 (-0.039008) | 1.471546 / 1.452155 (0.019391) | 1.507991 / 1.492716 (0.015274) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208658 / 0.018006 (0.190651) | 0.445026 / 0.000490 (0.444536) | 0.002593 / 0.000200 (0.002393) | 0.000084 / 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026968 / 0.037411 (-0.010444) | 0.108188 / 0.014526 (0.093662) | 0.117965 / 0.176557 (-0.058592) | 0.182769 / 0.737135 (-0.554366) | 0.121671 / 0.296338 (-0.174667) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.400677 / 0.215209 (0.185468) | 4.012577 / 2.077655 (1.934922) | 1.821324 / 1.504120 (0.317204) | 1.624438 / 1.541195 (0.083244) | 1.731886 / 1.468490 
(0.263396) | 0.698089 / 4.584777 (-3.886688) | 3.786165 / 3.745712 (0.040453) | 2.079742 / 5.269862 (-3.190119) | 1.325032 / 4.565676 (-3.240644) | 0.085229 / 0.424275 (-0.339046) | 0.012017 / 0.007607 (0.004410) | 0.511779 / 0.226044 (0.285734) | 5.114358 / 2.268929 (2.845430) | 2.324763 / 55.444624 (-53.119861) | 2.011864 / 6.876477 (-4.864612) | 2.075875 / 2.142072 (-0.066198) | 0.853475 / 4.805227 (-3.951752) | 0.166949 / 6.500664 (-6.333715) | 0.064669 / 0.075469 (-0.010800) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.230212 / 1.841788 (-0.611576) | 14.942371 / 8.074308 (6.868063) | 14.075795 / 10.191392 (3.884403) | 0.156920 / 0.680424 (-0.523504) | 0.029002 / 0.534201 (-0.505199) | 0.442213 / 0.579283 (-0.137070) | 0.436888 / 0.434364 (0.002524) | 0.519725 / 0.540337 (-0.020613) | 0.604634 / 1.386936 (-0.782303) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007649 / 0.011353 (-0.003704) | 0.005298 / 0.011008 (-0.005710) | 0.076559 / 0.038508 (0.038050) | 0.033723 / 0.023109 (0.010614) | 0.334946 / 0.275898 (0.059048) | 0.372785 / 0.323480 (0.049305) | 0.006032 / 0.007986 (-0.001953) | 0.004125 / 0.004328 (-0.000204) | 0.075366 / 0.004250 (0.071116) | 0.049061 / 0.037052 (0.012009) | 0.338188 / 0.258489 (0.079699) | 0.389693 / 0.293841 (0.095852) | 0.037246 / 0.128546 (-0.091301) | 0.012530 / 0.075646 (-0.063116) | 0.088053 / 0.419271 (-0.331219) | 0.049844 / 0.043533 (0.006311) | 0.338476 / 0.255139 (0.083337) | 0.361672 / 0.283200 (0.078473) | 0.101982 / 0.141683 (-0.039701) | 1.479550 / 1.452155 (0.027396) | 1.541031 / 1.492716 (0.048315) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.226162 / 0.018006 (0.208156) | 0.439108 / 0.000490 (0.438618) | 0.001102 / 0.000200 (0.000902) | 0.000076 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030240 / 0.037411 (-0.007171) | 0.113754 / 0.014526 (0.099229) | 0.122839 / 0.176557 (-0.053717) | 0.192531 / 0.737135 (-0.544604) | 0.129455 / 0.296338 (-0.166884) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.424701 / 0.215209 (0.209492) | 4.208161 / 2.077655 (2.130507) | 2.045733 / 1.504120 (0.541613) | 1.892369 / 1.541195 (0.351174) | 1.997024 / 1.468490 (0.528534) | 0.739883 / 4.584777 (-3.844894) | 3.760939 / 3.745712 (0.015227) | 3.195748 / 5.269862 (-2.074113) | 1.731480 / 4.565676 (-2.834197) | 0.087013 / 0.424275 (-0.337262) | 0.012550 / 0.007607 (0.004943) | 0.540829 / 0.226044 (0.314785) | 5.329933 / 2.268929 (3.061005) | 2.507572 / 55.444624 (-52.937052) | 2.167761 / 6.876477 (-4.708716) | 2.250298 / 2.142072 (0.108226) | 0.868718 / 4.805227 (-3.936510) | 0.181643 / 6.500664 (-6.319021) | 0.064817 / 0.075469 (-0.010653) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.295001 / 1.841788 (-0.546787) | 15.236413 / 8.074308 (7.162105) | 13.692212 / 10.191392 (3.500820) | 0.186330 / 0.680424 (-0.494094) | 0.017492 / 0.534201 (-0.516709) | 0.427365 / 0.579283 (-0.151919) | 0.427781 / 0.434364 (-0.006583) | 0.533763 / 0.540337 (-0.006575) | 0.636011 / 1.386936 (-0.750925) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/2474 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2474/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2474/comments | https://api.github.com/repos/huggingface/datasets/issues/2474/events | https://github.com/huggingface/datasets/issues/2474 | 917,622,055 | MDU6SXNzdWU5MTc2MjIwNTU= | 2,474 | cache_dir parameter for load_from_disk ? | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 4 | 2021-06-10T17:39:36Z | 2022-02-16T14:55:01Z | 2022-02-16T14:55:00Z | null | **Is your feature request related to a problem? Please describe.**
When using Google Colab, big datasets can be an issue, as they won't fit on the VM's disk. Therefore, mounting Google Drive could be a possible solution. Unfortunately, when loading my own dataset by using the _load_from_disk_ function, the data gets cached to the VM's disk:
```python
from datasets import load_from_disk
myPreprocessedData = load_from_disk("/content/gdrive/MyDrive/ASR_data/myPreprocessedData")
```
I know that caching on Google Drive could slow down learning. But at least it would run.
**Describe the solution you'd like**
Add a cache_dir parameter to the load_from_disk function.
**Describe alternatives you've considered**
It looks like you could write a custom loading script for the load_dataset function. But this seems to be much too complex for my use case. Is there perhaps a template here that uses the load_from_disk function?
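For illustration, here is a minimal sketch of what I have in mind; it only uses the existing API (`load_from_disk` plus the `cache_file_name` argument of `map`), and the Drive paths are just the ones from my example above:
```python
from datasets import load_from_disk

# Read the saved dataset directly from the mounted Drive folder.
dataset = load_from_disk("/content/gdrive/MyDrive/ASR_data/myPreprocessedData")

# Cache files produced by `map` can already be redirected explicitly, which is
# roughly what a `cache_dir` parameter on `load_from_disk` would generalize.
dataset = dataset.map(
    lambda batch: batch,  # no-op, just to show where the cache file goes
    batched=True,
    cache_file_name="/content/gdrive/MyDrive/ASR_data/cache/map.arrow",
)
```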
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2474/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2474/timeline | null | completed | null | null | false | [
"Hi ! `load_from_disk` doesn't move the data. If you specify a local path to your mounted drive, then the dataset is going to be loaded directly from the arrow file in this directory. The cache files that result from `map` operations are also stored in the same directory by default.\r\n\r\nHowever note than writing data to your google drive actually fills the VM's disk (see https://github.com/huggingface/datasets/issues/643)\r\n\r\nGiven that, I don't think that changing the cache directory changes anything.\r\n\r\nLet me know what you think",
"Thanks for your answer! I am a little surprised since I just want to read the dataset.\r\n\r\nAfter debugging a bit, I noticed that the VM’s disk fills up when the tables (generator) are converted to a list:\r\n\r\nhttps://github.com/huggingface/datasets/blob/5ba149773d23369617563d752aca922081277ec2/src/datasets/table.py#L850\r\n\r\nIf I try to iterate through the table’s generator e.g.: \r\n\r\n`length = sum(1 for x in tables)`\r\n\r\nthe VM’s disk fills up as well.\r\n\r\nI’m running out of Ideas 😄 ",
"Indeed reading the data shouldn't increase the VM's disk. Not sure what google colab does under the hood for that to happen",
"Apparently, Colab uses a local cache of the data files read/written from Google Drive. See:\r\n- https://github.com/googlecolab/colabtools/issues/2087#issuecomment-860818457\r\n- https://github.com/googlecolab/colabtools/issues/1915#issuecomment-804234540\r\n- https://github.com/googlecolab/colabtools/issues/2147#issuecomment-885052636"
] |
https://api.github.com/repos/huggingface/datasets/issues/167 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/167/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/167/comments | https://api.github.com/repos/huggingface/datasets/issues/167/events | https://github.com/huggingface/datasets/pull/167 | 620,908,786 | MDExOlB1bGxSZXF1ZXN0NDIwMDY0NDMw | 167 | [Tests] refactor tests | [] | closed | false | null | 1 | 2020-05-19T11:43:32Z | 2020-05-19T16:17:12Z | 2020-05-19T16:17:10Z | null | This PR separates AWS and Local tests to remove these ugly statements in the script:
```python
if "/" not in dataset_name:
    logging.info("Skip {} because it is a canonical dataset")
    return
```
To run an `aws` test, one should now run the following command:
```bash
pytest -s tests/test_dataset_common.py::AWSDatasetTest::test_builder_class_wmt14
```
The same `local` test can be run with:
```bash
pytest -s tests/test_dataset_common.py::LocalDatasetTest::test_builder_class_wmt14
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/167/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/167/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/167.diff",
"html_url": "https://github.com/huggingface/datasets/pull/167",
"merged_at": "2020-05-19T16:17:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/167.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/167"
} | true | [
"Nice !"
] |
https://api.github.com/repos/huggingface/datasets/issues/2327 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2327/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2327/comments | https://api.github.com/repos/huggingface/datasets/issues/2327/events | https://github.com/huggingface/datasets/issues/2327 | 877,565,831 | MDU6SXNzdWU4Nzc1NjU4MzE= | 2,327 | A syntax error in example | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2021-05-06T14:34:44Z | 2021-05-20T03:04:19Z | 2021-05-20T03:04:19Z | null | 
Sorry to report with an image, I can't find the template source code of this snippet. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2327/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2327/timeline | null | completed | null | null | false | [
"cc @beurkinger but I think this has been fixed internally and will soon be updated right ?",
"This issue has been fixed."
] |
https://api.github.com/repos/huggingface/datasets/issues/1852 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1852/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1852/comments | https://api.github.com/repos/huggingface/datasets/issues/1852/events | https://github.com/huggingface/datasets/pull/1852 | 804,633,033 | MDExOlB1bGxSZXF1ZXN0NTcwMzY3NTU1 | 1,852 | Add Arabic Speech Corpus | [] | closed | false | null | 0 | 2021-02-09T15:02:26Z | 2021-02-11T10:18:55Z | 2021-02-11T10:18:55Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 2,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1852/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1852/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1852.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1852",
"merged_at": "2021-02-11T10:18:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1852.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1852"
} | true | [] |
|
https://api.github.com/repos/huggingface/datasets/issues/1610 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1610/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1610/comments | https://api.github.com/repos/huggingface/datasets/issues/1610/events | https://github.com/huggingface/datasets/issues/1610 | 771,453,599 | MDU6SXNzdWU3NzE0NTM1OTk= | 1,610 | shuffle does not accept seed | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 3 | 2020-12-19T20:59:39Z | 2021-01-04T10:00:03Z | 2021-01-04T10:00:03Z | null | Hi
I need to shuffle the dataset, but the shuffling needs to be based on epoch+seed to be consistent across the cores. When I pass a seed to shuffle, it is not accepted. Could you assist me with this? Thanks @lhoestq
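To make the intended usage concrete, this is roughly what I would like to write (per split, with a seed derived from the epoch); the epoch bookkeeping below is just illustrative:
```python
from datasets import load_dataset

data = load_dataset("scitail", "snli_format")
base_seed = 2

for epoch in range(3):
    # Re-shuffle the training split deterministically at every epoch, so that
    # all cores produce the same order for the same (seed, epoch) pair.
    shuffled_train = data["train"].shuffle(seed=base_seed + epoch)
    print(epoch, shuffled_train[0])
```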
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1610/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1610/timeline | null | completed | null | null | false | [
"Hi, did you check the doc on `shuffle`?\r\nhttps://huggingface.co/docs/datasets/package_reference/main_classes.html?datasets.Dataset.shuffle#datasets.Dataset.shuffle",
"Hi Thomas\r\nthanks for reponse, yes, I did checked it, but this does not work for me please see \r\n\r\n```\r\n(internship) rkarimi@italix17:/idiap/user/rkarimi/dev$ python \r\nPython 3.7.9 (default, Aug 31 2020, 12:42:55) \r\n[GCC 7.3.0] :: Anaconda, Inc. on linux\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> import datasets \r\n2020-12-20 01:48:50.766004: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory\r\n2020-12-20 01:48:50.766029: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.\r\n>>> data = datasets.load_dataset(\"scitail\", \"snli_format\")\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets\r\nReusing dataset scitail (/idiap/temp/rkarimi/cache_home_1/datasets/scitail/snli_format/1.1.0/fd8ccdfc3134ce86eb4ef10ba7f21ee2a125c946e26bb1dd3625fe74f48d3b90)\r\n>>> data.shuffle(seed=2)\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\nTypeError: shuffle() got an unexpected keyword argument 'seed'\r\n\r\n```\r\n\r\ndatasets version\r\n`datasets 1.1.2 <pip>\r\n`\r\n",
"Thanks for reporting ! \r\n\r\nIndeed it looks like an issue with `suffle` on `DatasetDict`. We're going to fix that.\r\nIn the meantime you can shuffle each split (train, validation, test) separately:\r\n```python\r\nshuffled_train_dataset = data[\"train\"].shuffle(seed=42)\r\n```\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/3287 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3287/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3287/comments | https://api.github.com/repos/huggingface/datasets/issues/3287/events | https://github.com/huggingface/datasets/pull/3287 | 1,056,079,724 | PR_kwDODunzps4upsWR | 3,287 | Add The Pile dataset and PubMed Central subset | [] | closed | false | null | 0 | 2021-11-17T12:35:58Z | 2021-12-01T15:29:08Z | 2021-12-01T15:29:07Z | null | Add:
- The complete final version of The Pile dataset: "all" config
- PubMed Central subset of The Pile: "pubmed_central" config
Close #1675, close bigscience-workshop/data_tooling#74.
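For reference, once merged the configs should be loadable along these lines (the `the_pile` name and the streaming flag reflect how I expect it to be used, given the size of the corpus):
```python
from datasets import load_dataset

# Stream the PubMed Central subset instead of downloading the whole dump.
pubmed_central = load_dataset("the_pile", "pubmed_central", split="train", streaming=True)
print(next(iter(pubmed_central)))
```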
CC: @StellaAthena, @lewtun | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 5,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 5,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3287/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3287/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3287.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3287",
"merged_at": "2021-12-01T15:29:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3287.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3287"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5501 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5501/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5501/comments | https://api.github.com/repos/huggingface/datasets/issues/5501/events | https://github.com/huggingface/datasets/pull/5501 | 1,569,644,159 | PR_kwDODunzps5JMTn8 | 5,501 | Increase chunk size for speeding up file downloads | [] | open | false | null | 4 | 2023-02-03T10:50:10Z | 2023-02-09T11:04:11Z | null | null | Original fix: https://github.com/huggingface/huggingface_hub/pull/1267
Not sure this function is actually still called though.
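For context, the change is only about streaming downloads in bigger chunks; a rough sketch of the idea with plain `requests` (not the actual patch) looks like this:
```python
import requests

CHUNK_SIZE = 10 * 1024 * 1024  # 10 MiB per read instead of a small default

def download(url: str, path: str) -> None:
    # Stream the response and write it in large chunks: fewer Python-level
    # iterations per file usually means noticeably faster downloads.
    with requests.get(url, stream=True, timeout=10.0) as response:
        response.raise_for_status()
        with open(path, "wb") as f:
            for chunk in response.iter_content(chunk_size=CHUNK_SIZE):
                f.write(chunk)
```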
I haven't done benches on this. Is there a dataset where files are hosted on the hub through cloudfront so we can have the same setup as in `hf_hub` ? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5501/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5501/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5501.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5501",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5501.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5501"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5501). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008407 / 0.011353 (-0.002946) | 0.004651 / 0.011008 (-0.006357) | 0.100367 / 0.038508 (0.061859) | 0.029107 / 0.023109 (0.005998) | 0.302798 / 0.275898 (0.026900) | 0.354379 / 0.323480 (0.030899) | 0.006985 / 0.007986 (-0.001001) | 0.003365 / 0.004328 (-0.000963) | 0.078312 / 0.004250 (0.074062) | 0.034205 / 0.037052 (-0.002847) | 0.310431 / 0.258489 (0.051941) | 0.346239 / 0.293841 (0.052398) | 0.033800 / 0.128546 (-0.094747) | 0.011515 / 0.075646 (-0.064131) | 0.323588 / 0.419271 (-0.095684) | 0.040766 / 0.043533 (-0.002767) | 0.300914 / 0.255139 (0.045775) | 0.332983 / 0.283200 (0.049784) | 0.087500 / 0.141683 (-0.054182) | 1.469505 / 1.452155 (0.017350) | 1.505119 / 1.492716 (0.012403) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.187319 / 0.018006 (0.169313) | 0.405498 / 0.000490 (0.405008) | 0.001000 / 0.000200 (0.000800) | 0.000069 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022583 / 0.037411 (-0.014828) | 0.098096 / 0.014526 (0.083570) | 0.104272 / 0.176557 (-0.072284) | 0.142801 / 0.737135 (-0.594335) | 0.109749 / 0.296338 (-0.186590) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.423343 / 0.215209 (0.208134) | 4.215116 / 2.077655 (2.137461) | 1.899714 / 1.504120 (0.395594) | 1.689579 / 1.541195 (0.148384) | 1.710292 / 1.468490 
(0.241801) | 0.690976 / 4.584777 (-3.893801) | 3.432501 / 3.745712 (-0.313212) | 1.899600 / 5.269862 (-3.370261) | 1.279801 / 4.565676 (-3.285876) | 0.082763 / 0.424275 (-0.341512) | 0.012545 / 0.007607 (0.004938) | 0.531381 / 0.226044 (0.305336) | 5.320077 / 2.268929 (3.051148) | 2.370705 / 55.444624 (-53.073919) | 2.007089 / 6.876477 (-4.869388) | 2.062412 / 2.142072 (-0.079661) | 0.814998 / 4.805227 (-3.990229) | 0.149822 / 6.500664 (-6.350842) | 0.064399 / 0.075469 (-0.011070) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.226196 / 1.841788 (-0.615591) | 13.823443 / 8.074308 (5.749134) | 13.813667 / 10.191392 (3.622275) | 0.161289 / 0.680424 (-0.519135) | 0.028569 / 0.534201 (-0.505632) | 0.390360 / 0.579283 (-0.188923) | 0.396217 / 0.434364 (-0.038147) | 0.483120 / 0.540337 (-0.057217) | 0.570041 / 1.386936 (-0.816895) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006422 / 0.011353 (-0.004931) | 0.004528 / 0.011008 (-0.006481) | 0.076043 / 0.038508 (0.037535) | 0.027631 / 0.023109 (0.004522) | 0.340622 / 0.275898 (0.064724) | 0.376694 / 0.323480 (0.053214) | 0.004993 / 0.007986 (-0.002992) | 0.003403 / 0.004328 (-0.000926) | 0.074521 / 0.004250 (0.070270) | 0.037568 / 0.037052 (0.000516) | 0.343423 / 0.258489 (0.084934) | 0.387729 / 0.293841 (0.093888) | 0.031790 / 0.128546 (-0.096757) | 0.011767 / 0.075646 (-0.063879) | 0.085182 / 0.419271 (-0.334090) | 0.042867 / 0.043533 (-0.000666) | 0.341269 / 0.255139 (0.086130) | 0.368460 / 0.283200 (0.085261) | 0.090153 / 0.141683 (-0.051530) | 1.536490 / 1.452155 (0.084335) | 1.596403 / 1.492716 (0.103686) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.222373 / 0.018006 (0.204367) | 0.396145 / 0.000490 (0.395655) | 0.000384 / 0.000200 (0.000184) | 0.000062 / 0.000054 (0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024801 / 0.037411 (-0.012610) | 0.099711 / 0.014526 (0.085185) | 0.106094 / 0.176557 (-0.070463) | 0.147819 / 0.737135 (-0.589316) | 0.110065 / 0.296338 (-0.186274) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.442863 / 0.215209 (0.227654) | 4.420043 / 2.077655 (2.342388) | 2.070136 / 1.504120 (0.566016) | 1.862363 / 1.541195 (0.321168) | 1.910890 / 1.468490 (0.442400) | 0.702570 / 4.584777 (-3.882207) | 3.435855 / 3.745712 (-0.309857) | 1.871290 / 5.269862 (-3.398572) | 1.169321 / 4.565676 (-3.396355) | 0.083674 / 0.424275 (-0.340601) | 0.012823 / 0.007607 (0.005216) | 0.539330 / 0.226044 (0.313285) | 5.403317 / 2.268929 (3.134389) | 2.536508 / 55.444624 (-52.908117) | 2.179629 / 6.876477 (-4.696847) | 2.207586 / 2.142072 (0.065514) | 0.812256 / 4.805227 (-3.992972) | 0.152915 / 6.500664 (-6.347749) | 0.068431 / 0.075469 (-0.007038) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.294982 / 1.841788 (-0.546806) | 13.912811 / 8.074308 (5.838503) | 13.415658 / 10.191392 (3.224266) | 0.149531 / 0.680424 (-0.530893) | 0.016785 / 0.534201 (-0.517416) | 0.381055 / 0.579283 (-0.198228) | 0.392084 / 0.434364 (-0.042280) | 0.472614 / 0.540337 (-0.067724) | 0.559799 / 1.386936 (-0.827137) |\n\n</details>\n</details>\n\n\n",
"We simply do GET requests to hf.co to download files from the Hub right now. We may switch to hfh when we update how we do caching \r\n\r\nYou can try on any dataset hosted on the hub like `imagenet-1k`",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010931 / 0.011353 (-0.000422) | 0.005730 / 0.011008 (-0.005278) | 0.116653 / 0.038508 (0.078145) | 0.041439 / 0.023109 (0.018330) | 0.359559 / 0.275898 (0.083661) | 0.408398 / 0.323480 (0.084918) | 0.009193 / 0.007986 (0.001208) | 0.006024 / 0.004328 (0.001695) | 0.087743 / 0.004250 (0.083492) | 0.048636 / 0.037052 (0.011584) | 0.363133 / 0.258489 (0.104643) | 0.407144 / 0.293841 (0.113303) | 0.044610 / 0.128546 (-0.083936) | 0.014075 / 0.075646 (-0.061571) | 0.396506 / 0.419271 (-0.022766) | 0.057014 / 0.043533 (0.013482) | 0.358254 / 0.255139 (0.103115) | 0.399887 / 0.283200 (0.116687) | 0.115337 / 0.141683 (-0.026346) | 1.731655 / 1.452155 (0.279500) | 1.813276 / 1.492716 (0.320560) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.210197 / 0.018006 (0.192191) | 0.475887 / 0.000490 (0.475397) | 0.003323 / 0.000200 (0.003123) | 0.000100 / 0.000054 (0.000045) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031686 / 0.037411 (-0.005725) | 0.131167 / 0.014526 (0.116641) | 0.137919 / 0.176557 (-0.038637) | 0.184843 / 0.737135 (-0.552293) | 0.144998 / 0.296338 (-0.151340) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.471371 / 0.215209 (0.256162) | 4.693739 / 2.077655 (2.616084) | 2.251567 / 1.504120 (0.747447) | 1.993653 / 1.541195 (0.452458) | 2.053236 / 1.468490 
(0.584746) | 0.809226 / 4.584777 (-3.775551) | 4.494120 / 3.745712 (0.748408) | 2.436921 / 5.269862 (-2.832940) | 1.541973 / 4.565676 (-3.023704) | 0.098401 / 0.424275 (-0.325874) | 0.014329 / 0.007607 (0.006722) | 0.597813 / 0.226044 (0.371769) | 5.964035 / 2.268929 (3.695107) | 2.709283 / 55.444624 (-52.735341) | 2.323537 / 6.876477 (-4.552940) | 2.401707 / 2.142072 (0.259635) | 0.976379 / 4.805227 (-3.828848) | 0.194638 / 6.500664 (-6.306026) | 0.076904 / 0.075469 (0.001435) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.516877 / 1.841788 (-0.324911) | 18.228010 / 8.074308 (10.153702) | 16.631750 / 10.191392 (6.440358) | 0.176030 / 0.680424 (-0.504394) | 0.033769 / 0.534201 (-0.500432) | 0.520511 / 0.579283 (-0.058773) | 0.531764 / 0.434364 (0.097400) | 0.648658 / 0.540337 (0.108321) | 0.779124 / 1.386936 (-0.607812) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008635 / 0.011353 (-0.002718) | 0.005785 / 0.011008 (-0.005223) | 0.087042 / 0.038508 (0.048534) | 0.039632 / 0.023109 (0.016523) | 0.419719 / 0.275898 (0.143821) | 0.463860 / 0.323480 (0.140380) | 0.006621 / 0.007986 (-0.001364) | 0.004655 / 0.004328 (0.000327) | 0.087003 / 0.004250 (0.082753) | 0.057122 / 0.037052 (0.020069) | 0.417820 / 0.258489 (0.159331) | 0.485981 / 0.293841 (0.192140) | 0.042606 / 0.128546 (-0.085940) | 0.014369 / 0.075646 (-0.061278) | 0.101939 / 0.419271 (-0.317333) | 0.058303 / 0.043533 (0.014770) | 0.415053 / 0.255139 (0.159914) | 0.439914 / 0.283200 (0.156714) | 0.134628 / 0.141683 (-0.007055) | 1.765464 / 1.452155 (0.313309) | 1.843963 / 1.492716 (0.351247) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.307156 / 0.018006 (0.289150) | 0.476657 / 0.000490 (0.476167) | 0.019718 / 0.000200 (0.019518) | 0.000160 / 0.000054 (0.000105) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035286 / 0.037411 (-0.002125) | 0.138094 / 0.014526 (0.123568) | 0.144768 / 0.176557 (-0.031789) | 0.191386 / 0.737135 (-0.545750) | 0.151988 / 0.296338 (-0.144350) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.504733 / 0.215209 (0.289523) | 5.027048 / 2.077655 (2.949394) | 2.441571 / 1.504120 (0.937451) | 2.198242 / 1.541195 (0.657047) | 2.298473 / 1.468490 (0.829983) | 0.848048 / 4.584777 (-3.736729) | 4.613102 / 3.745712 (0.867390) | 2.522824 / 5.269862 (-2.747037) | 1.610159 / 4.565676 (-2.955517) | 0.105197 / 0.424275 (-0.319078) | 0.015195 / 0.007607 (0.007588) | 0.626976 / 0.226044 (0.400932) | 6.268459 / 2.268929 (3.999530) | 3.014387 / 55.444624 (-52.430237) | 2.554102 / 6.876477 (-4.322375) | 2.656051 / 2.142072 (0.513979) | 1.027978 / 4.805227 (-3.777249) | 0.200686 / 6.500664 (-6.299978) | 0.077104 / 0.075469 (0.001635) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.485228 / 1.841788 (-0.356560) | 18.319949 / 8.074308 (10.245641) | 15.855739 / 10.191392 (5.664347) | 0.204365 / 0.680424 (-0.476059) | 0.023824 / 0.534201 (-0.510377) | 0.505000 / 0.579283 (-0.074283) | 0.502866 / 0.434364 (0.068502) | 0.629574 / 0.540337 (0.089237) | 0.746602 / 1.386936 (-0.640334) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/3233 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3233/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3233/comments | https://api.github.com/repos/huggingface/datasets/issues/3233/events | https://github.com/huggingface/datasets/pull/3233 | 1,047,474,931 | PR_kwDODunzps4uOl9- | 3,233 | Improve repository structure docs | [] | closed | false | null | 0 | 2021-11-08T13:51:35Z | 2021-11-09T10:02:18Z | 2021-11-09T10:02:17Z | null | Continuation of the documentation started in https://github.com/huggingface/datasets/pull/3221, taking into account @stevhliu 's comments | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3233/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3233/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3233.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3233",
"merged_at": "2021-11-09T10:02:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3233.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3233"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3924 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3924/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3924/comments | https://api.github.com/repos/huggingface/datasets/issues/3924/events | https://github.com/huggingface/datasets/pull/3924 | 1,169,805,813 | PR_kwDODunzps40eED5 | 3,924 | Document cases for github datasets | [] | closed | false | null | 2 | 2022-03-15T15:10:10Z | 2022-04-05T18:33:15Z | 2022-03-15T15:41:23Z | null | In general we recommend adding the new dataset under a username or organization in the Hugging Face Hub at [hf.co/datasets](hf.co/datasets), but users can still add a dataset on github in some cases.
I added a paragraph in the documentation to explain in which cases it can make more sense to open a PR on github:
- when you need the dataset to be reviewed
- when you need long-term maintenance from the HF team
- when there’s no clear org name / namespace that you can put the dataset under | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3924/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3924/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3924.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3924",
"merged_at": "2022-03-15T15:41:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3924.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3924"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3924). All of your documentation changes will be reflected on that endpoint.",
"Yay!"
] |
https://api.github.com/repos/huggingface/datasets/issues/3798 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3798/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3798/comments | https://api.github.com/repos/huggingface/datasets/issues/3798/events | https://github.com/huggingface/datasets/pull/3798 | 1,154,411,066 | PR_kwDODunzps4zrl5Y | 3,798 | Fix error message in CSV loader for newer Pandas versions | [] | closed | false | null | 0 | 2022-02-28T18:24:10Z | 2022-02-28T18:51:39Z | 2022-02-28T18:51:38Z | null | Fix the error message in the CSV loader for `Pandas >= 1.4`. To fix this, I directly print the current file name in the for-loop. An alternative would be to use a check similar to this:
```python
csv_file_reader.handle.handle if datasets.config.PANDAS_VERSION >= version.parse("1.4") else csv_file_reader.f
```
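Written out as a small helper (same attribute names as above, with the version check done through `packaging` so it is self-contained), the alternative would look roughly like:
```python
from packaging import version
import pandas as pd

def current_csv_file(csv_file_reader):
    # Pandas >= 1.4 exposes the underlying file object in a different place,
    # hence the version gate; attribute names follow the snippet above.
    if version.parse(pd.__version__) >= version.parse("1.4"):
        return csv_file_reader.handle.handle
    return csv_file_reader.f
```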
CC: @SBrandeis | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3798/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3798/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3798.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3798",
"merged_at": "2022-02-28T18:51:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3798.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3798"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3395 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3395/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3395/comments | https://api.github.com/repos/huggingface/datasets/issues/3395/events | https://github.com/huggingface/datasets/pull/3395 | 1,073,432,650 | PR_kwDODunzps4vgTKG | 3,395 | Fix formatting in IterableDataset.map docs | [] | closed | false | null | 0 | 2021-12-07T14:41:01Z | 2021-12-08T10:11:33Z | 2021-12-08T10:11:33Z | null | Fix formatting in the recently added `Map` section of the streaming docs. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3395/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3395/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3395.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3395",
"merged_at": "2021-12-08T10:11:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3395.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3395"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1538 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1538/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1538/comments | https://api.github.com/repos/huggingface/datasets/issues/1538/events | https://github.com/huggingface/datasets/pull/1538 | 765,139,739 | MDExOlB1bGxSZXF1ZXN0NTM4ODkxOTE3 | 1,538 | tweets_hate_speech_detection | [] | closed | false | null | 3 | 2020-12-13T07:37:53Z | 2020-12-21T15:54:28Z | 2020-12-21T15:54:27Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1538/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1538/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1538.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1538",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1538.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1538"
} | true | [
"Hi @lhoestq I have added this new dataset for tweet's hate speech detection. \r\n\r\nPlease if u could review it. \r\n\r\nThank you",
"Hi @darshan-gandhi have you add a chance to take a look at my suggestions ?\r\n\r\nFeel free to ping me when you're ready for the final review",
"Closing in favor of #1607"
] |
|
https://api.github.com/repos/huggingface/datasets/issues/5592 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5592/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5592/comments | https://api.github.com/repos/huggingface/datasets/issues/5592/events | https://github.com/huggingface/datasets/pull/5592 | 1,603,619,124 | PR_kwDODunzps5K9dWr | 5,592 | Fix docstring example | [] | closed | false | null | 2 | 2023-02-28T18:42:37Z | 2023-02-28T19:26:33Z | 2023-02-28T19:19:15Z | null | Fixes #5581 to use the correct output for the `set_format` method. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5592/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5592/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5592.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5592",
"merged_at": "2023-02-28T19:19:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5592.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5592"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009526 / 0.011353 (-0.001827) | 0.005132 / 0.011008 (-0.005876) | 0.101312 / 0.038508 (0.062804) | 0.035703 / 0.023109 (0.012594) | 0.301788 / 0.275898 (0.025890) | 0.368411 / 0.323480 (0.044932) | 0.008163 / 0.007986 (0.000177) | 0.005462 / 0.004328 (0.001134) | 0.077282 / 0.004250 (0.073031) | 0.044139 / 0.037052 (0.007086) | 0.312280 / 0.258489 (0.053791) | 0.351870 / 0.293841 (0.058029) | 0.038266 / 0.128546 (-0.090281) | 0.012051 / 0.075646 (-0.063595) | 0.335109 / 0.419271 (-0.084163) | 0.047596 / 0.043533 (0.004064) | 0.300931 / 0.255139 (0.045792) | 0.325705 / 0.283200 (0.042505) | 0.100472 / 0.141683 (-0.041211) | 1.475037 / 1.452155 (0.022882) | 1.520059 / 1.492716 (0.027343) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.211096 / 0.018006 (0.193089) | 0.442988 / 0.000490 (0.442498) | 0.003644 / 0.000200 (0.003444) | 0.000090 / 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027492 / 0.037411 (-0.009919) | 0.108981 / 0.014526 (0.094455) | 0.117836 / 0.176557 (-0.058720) | 0.161220 / 0.737135 (-0.575915) | 0.124765 / 0.296338 (-0.171574) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.413480 / 0.215209 (0.198271) | 4.111355 / 2.077655 (2.033700) | 1.933024 / 1.504120 (0.428904) | 1.727467 / 1.541195 (0.186272) | 1.827106 / 1.468490 
(0.358616) | 0.688209 / 4.584777 (-3.896568) | 3.759672 / 3.745712 (0.013960) | 2.163806 / 5.269862 (-3.106056) | 1.473521 / 4.565676 (-3.092155) | 0.082859 / 0.424275 (-0.341416) | 0.012320 / 0.007607 (0.004713) | 0.515321 / 0.226044 (0.289277) | 5.158651 / 2.268929 (2.889722) | 2.489123 / 55.444624 (-52.955501) | 2.218910 / 6.876477 (-4.657566) | 2.257306 / 2.142072 (0.115233) | 0.861477 / 4.805227 (-3.943750) | 0.165857 / 6.500664 (-6.334807) | 0.063723 / 0.075469 (-0.011746) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.195163 / 1.841788 (-0.646625) | 14.954518 / 8.074308 (6.880210) | 14.272289 / 10.191392 (4.080897) | 0.167420 / 0.680424 (-0.513004) | 0.028907 / 0.534201 (-0.505294) | 0.450117 / 0.579283 (-0.129166) | 0.448532 / 0.434364 (0.014168) | 0.534406 / 0.540337 (-0.005931) | 0.633468 / 1.386936 (-0.753468) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007658 / 0.011353 (-0.003694) | 0.005266 / 0.011008 (-0.005742) | 0.075293 / 0.038508 (0.036785) | 0.034442 / 0.023109 (0.011333) | 0.346558 / 0.275898 (0.070660) | 0.391496 / 0.323480 (0.068017) | 0.005852 / 0.007986 (-0.002133) | 0.004121 / 0.004328 (-0.000207) | 0.074254 / 0.004250 (0.070004) | 0.048361 / 0.037052 (0.011309) | 0.344613 / 0.258489 (0.086124) | 0.401497 / 0.293841 (0.107656) | 0.037243 / 0.128546 (-0.091303) | 0.012505 / 0.075646 (-0.063142) | 0.087188 / 0.419271 (-0.332084) | 0.050114 / 0.043533 (0.006581) | 0.340454 / 0.255139 (0.085315) | 0.361087 / 0.283200 (0.077887) | 0.104692 / 0.141683 (-0.036991) | 1.419432 / 1.452155 (-0.032722) | 1.524709 / 1.492716 (0.031993) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.231820 / 0.018006 (0.213814) | 0.445791 / 0.000490 (0.445301) | 0.000442 / 0.000200 (0.000242) | 0.000061 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030445 / 0.037411 (-0.006967) | 0.111183 / 0.014526 (0.096657) | 0.123494 / 0.176557 (-0.053063) | 0.173121 / 0.737135 (-0.564014) | 0.124968 / 0.296338 (-0.171371) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.428854 / 0.215209 (0.213645) | 4.270262 / 2.077655 (2.192608) | 2.012075 / 1.504120 (0.507955) | 1.826564 / 1.541195 (0.285370) | 1.931699 / 1.468490 (0.463209) | 0.728762 / 4.584777 (-3.856015) | 3.879640 / 3.745712 (0.133928) | 3.325715 / 5.269862 (-1.944147) | 1.818573 / 4.565676 (-2.747104) | 0.087879 / 0.424275 (-0.336396) | 0.012530 / 0.007607 (0.004923) | 0.530249 / 0.226044 (0.304204) | 5.286110 / 2.268929 (3.017181) | 2.566649 / 55.444624 (-52.877975) | 2.210162 / 6.876477 (-4.666315) | 2.297562 / 2.142072 (0.155490) | 0.906161 / 4.805227 (-3.899066) | 0.171914 / 6.500664 (-6.328750) | 0.064182 / 0.075469 (-0.011287) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.285781 / 1.841788 (-0.556006) | 16.159072 / 8.074308 (8.084763) | 14.087492 / 10.191392 (3.896100) | 0.148789 / 0.680424 (-0.531635) | 0.018078 / 0.534201 (-0.516123) | 0.427748 / 0.579283 (-0.151535) | 0.447079 / 0.434364 (0.012715) | 0.535917 / 0.540337 (-0.004421) | 0.627491 / 1.386936 (-0.759445) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/883 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/883/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/883/comments | https://api.github.com/repos/huggingface/datasets/issues/883/events | https://github.com/huggingface/datasets/issues/883 | 749,750,801 | MDU6SXNzdWU3NDk3NTA4MDE= | 883 | Downloading/caching only a part of a datasets' dataset. | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "d876e3",
"default": true,
"description": "Further information is requested",
"id": 1935892912,
"name": "question",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question"
}
] | open | false | null | 3 | 2020-11-24T14:25:18Z | 2020-11-27T13:51:55Z | null | null | Hi,
I want to use the validation data *only* (of Natural Questions).
I don't want to have the whole dataset cached on my machine, just the dev set.
Is this possible? I can't find a way to do it in the docs.
Thank you,
Sapir | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/883/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/883/timeline | null | null | null | null | false | [
"Not at the moment but we could likely support this feature.",
"?",
"I think it would be a very helpful feature, because sometimes one only wants to evaluate models on the dev set, and the whole training data may be many times bigger.\r\nThis makes the task impossible with limited memory resources."
] |
https://api.github.com/repos/huggingface/datasets/issues/5550 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5550/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5550/comments | https://api.github.com/repos/huggingface/datasets/issues/5550/events | https://github.com/huggingface/datasets/pull/5550 | 1,591,409,475 | PR_kwDODunzps5KUl5i | 5,550 | Resolve four broken refs in the docs | [] | closed | false | null | 3 | 2023-02-20T08:52:11Z | 2023-02-20T15:16:13Z | 2023-02-20T15:09:13Z | null | Hello!
## Pull Request overview
* Resolve 4 broken references in the docs
## The problems
Two broken references [here](https://huggingface.co/docs/datasets/package_reference/main_classes#datasets.Dataset.class_encode_column):

---
One broken reference [here](https://huggingface.co/docs/datasets/package_reference/main_classes#datasets.Dataset.unique):

---
One missing reference [here](https://huggingface.co/docs/datasets/v2.9.0/en/package_reference/main_classes#datasets.DatasetDict.class_encode_column):

- Tom Aarsen | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5550/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5550/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5550.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5550",
"merged_at": "2023-02-20T15:09:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5550.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5550"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"See the resolved changes [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5550/en/package_reference/main_classes#datasets.Dataset.class_encode_column), [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5550/en/package_reference/main_classes#datasets.Dataset.unique) and [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5550/en/package_reference/main_classes#datasets.DatasetDict.class_encode_column), respectively",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008256 / 0.011353 (-0.003097) | 0.004400 / 0.011008 (-0.006608) | 0.098676 / 0.038508 (0.060168) | 0.028937 / 0.023109 (0.005828) | 0.302578 / 0.275898 (0.026680) | 0.334170 / 0.323480 (0.010690) | 0.006657 / 0.007986 (-0.001329) | 0.004581 / 0.004328 (0.000253) | 0.076874 / 0.004250 (0.072624) | 0.034401 / 0.037052 (-0.002652) | 0.303928 / 0.258489 (0.045439) | 0.348421 / 0.293841 (0.054580) | 0.033303 / 0.128546 (-0.095243) | 0.011445 / 0.075646 (-0.064202) | 0.322137 / 0.419271 (-0.097135) | 0.041072 / 0.043533 (-0.002461) | 0.306007 / 0.255139 (0.050868) | 0.325945 / 0.283200 (0.042745) | 0.086685 / 0.141683 (-0.054998) | 1.454956 / 1.452155 (0.002801) | 1.545525 / 1.492716 (0.052809) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.175536 / 0.018006 (0.157530) | 0.400203 / 0.000490 (0.399713) | 0.002103 / 0.000200 (0.001903) | 0.000072 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022750 / 0.037411 (-0.014661) | 0.095163 / 0.014526 (0.080637) | 0.103995 / 0.176557 (-0.072561) | 0.138806 / 0.737135 (-0.598330) | 0.105711 / 0.296338 (-0.190628) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.427860 / 0.215209 (0.212651) | 4.259594 / 2.077655 (2.181940) | 2.157986 / 1.504120 (0.653866) | 1.913814 / 1.541195 (0.372619) | 1.793455 / 1.468490 
(0.324965) | 0.702341 / 4.584777 (-3.882436) | 3.353086 / 3.745712 (-0.392626) | 1.856952 / 5.269862 (-3.412909) | 1.149963 / 4.565676 (-3.415713) | 0.082926 / 0.424275 (-0.341349) | 0.012307 / 0.007607 (0.004700) | 0.524531 / 0.226044 (0.298487) | 5.254766 / 2.268929 (2.985838) | 2.590157 / 55.444624 (-52.854468) | 2.272613 / 6.876477 (-4.603864) | 2.304367 / 2.142072 (0.162294) | 0.819298 / 4.805227 (-3.985929) | 0.152170 / 6.500664 (-6.348494) | 0.066563 / 0.075469 (-0.008906) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.205054 / 1.841788 (-0.636733) | 13.729073 / 8.074308 (5.654765) | 14.061037 / 10.191392 (3.869645) | 0.138020 / 0.680424 (-0.542404) | 0.028042 / 0.534201 (-0.506159) | 0.392260 / 0.579283 (-0.187024) | 0.405632 / 0.434364 (-0.028732) | 0.469583 / 0.540337 (-0.070755) | 0.563110 / 1.386936 (-0.823826) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006513 / 0.011353 (-0.004839) | 0.004402 / 0.011008 (-0.006606) | 0.076339 / 0.038508 (0.037831) | 0.027222 / 0.023109 (0.004112) | 0.338968 / 0.275898 (0.063070) | 0.378475 / 0.323480 (0.054995) | 0.005443 / 0.007986 (-0.002542) | 0.003312 / 0.004328 (-0.001016) | 0.075352 / 0.004250 (0.071102) | 0.034951 / 0.037052 (-0.002102) | 0.342268 / 0.258489 (0.083779) | 0.381024 / 0.293841 (0.087183) | 0.031568 / 0.128546 (-0.096979) | 0.011558 / 0.075646 (-0.064088) | 0.085267 / 0.419271 (-0.334005) | 0.041248 / 0.043533 (-0.002284) | 0.340422 / 0.255139 (0.085283) | 0.365497 / 0.283200 (0.082297) | 0.088278 / 0.141683 (-0.053405) | 1.479838 / 1.452155 (0.027683) | 1.554440 / 1.492716 (0.061724) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223240 / 0.018006 (0.205234) | 0.394771 / 0.000490 (0.394282) | 0.003022 / 0.000200 (0.002822) | 0.000071 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024842 / 0.037411 (-0.012570) | 0.099167 / 0.014526 (0.084641) | 0.106376 / 0.176557 (-0.070180) | 0.141397 / 0.737135 (-0.595738) | 0.110355 / 0.296338 (-0.185983) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.437598 / 0.215209 (0.222389) | 4.394964 / 2.077655 (2.317310) | 2.082660 / 1.504120 (0.578540) | 1.868690 / 1.541195 (0.327496) | 1.915190 / 1.468490 (0.446700) | 0.701035 / 4.584777 (-3.883742) | 3.306594 / 3.745712 (-0.439118) | 1.842681 / 5.269862 (-3.427181) | 1.155022 / 4.565676 (-3.410654) | 0.083310 / 0.424275 (-0.340965) | 0.012413 / 0.007607 (0.004806) | 0.543179 / 0.226044 (0.317135) | 5.445605 / 2.268929 (3.176676) | 2.545080 / 55.444624 (-52.899544) | 2.188741 / 6.876477 (-4.687736) | 2.205561 / 2.142072 (0.063489) | 0.804967 / 4.805227 (-4.000261) | 0.151024 / 6.500664 (-6.349640) | 0.066448 / 0.075469 (-0.009021) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.304671 / 1.841788 (-0.537117) | 13.996631 / 8.074308 (5.922323) | 13.617626 / 10.191392 (3.426234) | 0.141512 / 0.680424 (-0.538912) | 0.016527 / 0.534201 (-0.517674) | 0.384981 / 0.579283 (-0.194302) | 0.385198 / 0.434364 (-0.049166) | 0.469033 / 0.540337 (-0.071305) | 0.554738 / 1.386936 (-0.832198) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/780 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/780/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/780/comments | https://api.github.com/repos/huggingface/datasets/issues/780/events | https://github.com/huggingface/datasets/pull/780 | 732,738,647 | MDExOlB1bGxSZXF1ZXN0NTEyNjM0MzI0 | 780 | Add ASNQ dataset | [] | closed | false | null | 4 | 2020-10-29T23:31:56Z | 2020-11-10T09:26:23Z | 2020-11-10T09:26:23Z | null | This pull request adds the ASNQ dataset. It is a dataset for answer sentence selection derived from Google Natural Questions (NQ) dataset (Kwiatkowski et al. 2019). The dataset details can be found in the paper at https://arxiv.org/abs/1911.04118
The dataset is authored by Siddhant Garg, Thuy Vu and Alessandro Moschitti.
_Please note that I have no affiliation with the authors._
Repo: https://github.com/alexa/wqa_tanda
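A minimal sketch (not the PR's actual code) of how the feature schema settled on in the discussion below could be declared — the field names and the pos/neg label set are taken from that discussion:
```python
import datasets

# Binary ClassLabel plus two boolean flags derived from the original four-way labels.
features = datasets.Features(
    {
        "question": datasets.Value("string"),
        "sentence": datasets.Value("string"),
        "label": datasets.ClassLabel(names=["neg", "pos"]),
        "sentence_in_long_answer": datasets.Value("bool"),
        "short_answer_in_sentence": datasets.Value("bool"),
    }
)
```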
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/780/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/780/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/780.diff",
"html_url": "https://github.com/huggingface/datasets/pull/780",
"merged_at": "2020-11-10T09:26:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/780.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/780"
} | true | [
"Very nice !\r\nWhat do the `sentence1` and `sentence2` correspond to exactly ?\r\nAlso maybe you could use the `ClassLabel` feature type for the `label` field (see [snli](https://github.com/huggingface/datasets/blob/master/datasets/snli/snli.py) for example)",
"> What do the `sentence1` and `sentence2` correspond to exactly ?\r\n\r\n`sentence1` is a question, and `sentence2` is a candidate answer sentence. The labels are [1, 2, 3, 4] defining a relation between the answer sentence and the question. For example, label 4 means that the answer sentence is inside the _long_answer_ passage AND that the _short_answer_ is within the answer sentence. All the other labels are the negatives with different characteristics. (the short_answer, long_answer terminology is borrowed from Google's NQ dataset)\r\n\r\nShould I label them simply as `question` and `answer`? I was going more with what I saw in the examples/run_glue.py script, but I realize now there is no restriction around this.\r\n\r\n> Also maybe you could use the `ClassLabel` feature type for the `label` field (see [snli](https://github.com/huggingface/datasets/blob/master/datasets/snli/snli.py) for example)\r\n\r\nI am finding it difficult to assign names to each class, but perhaps it's possible. Here's the description of each class from the paper.\r\n\r\n1. Sentences from the document that are in the long answer but do not contain the annotated short answers. It is possible that these sentences might contain the short answer.\r\n2. Sentences from the document that are not in the long answer but contain the short answer string, that is, such occurrence is purely accidental.\r\n3. Sentences from the document that are neither in the long answer nor contain the short answer.\r\n4. Sentences from the document that are in the long answer and do contain the annotated short answers.\r\n\r\nAny ideas?\r\n\r\n",
"Yes it's better to have explicit feature names. Maybe go with question/answer or question/sentence.\r\nI read in the paper that 1,2 and 3 are considered negative and 4 positive.\r\nWe could have a binary classification label `label` (either positive of negative) and then two boolean fields `short_answser_in_sentence` and `sentence_in_long_answer`. What do you think ?",
"> Yes it's better to have explicit feature names. Maybe go with question/answer or question/sentence.\r\n> I read in the paper that 1,2 and 3 are considered negative and 4 positive.\r\n> We could have a binary classification label `label` (either positive of negative) and then two boolean fields `short_answser_in_sentence` and `sentence_in_long_answer`. What do you think ?\r\n\r\nOk, sounds good. I went with `sentence` to keep it consistent with `short_answer_in_sentence` and `sentence_in_long_answer`. \r\n\r\nI changed it to a ClassLabel with pos and neg classes and added the two above as features. Let me know if this is not what you had in mind.\r\n\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/3649 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3649/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3649/comments | https://api.github.com/repos/huggingface/datasets/issues/3649/events | https://github.com/huggingface/datasets/issues/3649 | 1,117,502,250 | I_kwDODunzps5Cm7sq | 3,649 | Add IGLUE dataset | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "19E633",
"default": false,
"description": "Multimodal datasets",
"id": 3608944167,
"name": "multimodal",
"node_id": "LA_kwDODunzps7XHB4n",
"url": "https://api.github.com/repos/huggingface/datasets/labels/multimodal"
}
] | open | false | null | 0 | 2022-01-28T14:59:41Z | 2022-01-28T15:02:35Z | null | null | ## Adding a Dataset
- **Name:** IGLUE
- **Description:** IGLUE brings together 4 vision-and-language tasks across 20 languages (Twitter [thread](https://twitter.com/ebugliarello/status/1487045497583976455?s=20&t=SB4LZGDhhkUW83ugcX_m5w))
- **Paper:** https://arxiv.org/abs/2201.11732
- **Data:** https://github.com/e-bug/iglue
- **Motivation:** This dataset would provide a nice example of combining the text and image features of `datasets` together for multimodal applications.
Note: the data / code are not yet visible on the GitHub repo, so I've pinged the authors for more information.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3649/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3649/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/359 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/359/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/359/comments | https://api.github.com/repos/huggingface/datasets/issues/359/events | https://github.com/huggingface/datasets/issues/359 | 653,656,279 | MDU6SXNzdWU2NTM2NTYyNzk= | 359 | ArrowBasedBuilder _prepare_split parse_schema breaks on nested structures | [] | closed | false | null | 4 | 2020-07-08T23:24:05Z | 2020-07-10T14:52:06Z | 2020-07-10T14:52:06Z | null | I tried using the Json dataloader to load some JSON lines files. but get an exception in the parse_schema function.
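A minimal call of the kind that triggers it (the file name is hypothetical; a nested list-of-strings column in the JSON lines file is enough to hit the schema-parsing failure shown below):
```python
from nlp import load_dataset

# records.jsonl is a placeholder; each line looks like {"text": "...", "tokens": ["a", "b"]}
ds = load_dataset("json", data_files={"train": "records.jsonl"})
```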
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-23-9aecfbee53bd> in <module>
55 from nlp import load_dataset
56
---> 57 ds = load_dataset("../text2struct/model/dataset_builder.py", data_files=rel_datafiles)
58
59
~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)
522 download_mode=download_mode,
523 ignore_verifications=ignore_verifications,
--> 524 save_infos=save_infos,
525 )
526
~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)
430 verify_infos = not save_infos and not ignore_verifications
431 self._download_and_prepare(
--> 432 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
433 )
434 # Sync info
~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
481 try:
482 # Prepare split will record examples associated to the split
--> 483 self._prepare_split(split_generator, **prepare_split_kwargs)
484 except OSError:
485 raise OSError("Cannot find data file. " + (self.manual_download_instructions or ""))
~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/builder.py in _prepare_split(self, split_generator)
736 schema_dict[field.name] = Value(str(field.type))
737
--> 738 parse_schema(writer.schema, features)
739 self.info.features = Features(features)
740
~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/builder.py in parse_schema(schema, schema_dict)
734 parse_schema(field.type.value_type, schema_dict[field.name])
735 else:
--> 736 schema_dict[field.name] = Value(str(field.type))
737
738 parse_schema(writer.schema, features)
<string> in __init__(self, dtype, id, _type)
~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/features.py in __post_init__(self)
55
56 def __post_init__(self):
---> 57 self.pa_type = string_to_arrow(self.dtype)
58
59 def __call__(self):
~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/features.py in string_to_arrow(type_str)
32 if str(type_str + "_") not in pa.__dict__:
33 raise ValueError(
---> 34 f"Neither {type_str} nor {type_str + '_'} seems to be a pyarrow data type. "
35 f"Please make sure to use a correct data type, see: "
36 f"https://arrow.apache.org/docs/python/api/datatypes.html#factory-functions"
ValueError: Neither list<item: string> nor list<item: string>_ seems to be a pyarrow data type. Please make sure to use a correct data type, see: https://arrow.apache.org/docs/python/api/datatypes.html#factory-functions
```
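A sketch of the imperative workaround referred to just below (hypothetical column names; depending on the `nlp` version the table may need to be passed as `arrow_table=`):
```python
import pyarrow as pa
from nlp import Dataset

# Build the Arrow table directly, bypassing the schema parsing that fails.
table = pa.Table.from_pydict({"text": ["a", "b"], "tokens": [["x", "y"], ["z"]]})
ds = Dataset(arrow_table=table)
```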
If I create the dataset imperatively, using a pyarrow table, the dataset is created correctly. If I override the `_prepare_split` method to avoid calling the validate schema, the dataset can load as well. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/359/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/359/timeline | null | completed | null | null | false | [
"Hi, it depends on what it is in your `dataset_builder.py` file. Can you share it?\r\n\r\nIf you are just loading `json` files, you can also directly use the `json` script (which will find the schema/features from your JSON structure):\r\n\r\n```python\r\nfrom nlp import load_dataset\r\nds = load_dataset(\"json\", data_files=rel_datafiles)\r\n```",
"The behavior I'm seeing is from the `json` script. \r\nI hacked this together to overcome the error with the `JSON` dataloader\r\n\r\n```\r\nclass DatasetBuilder(hf_nlp.ArrowBasedBuilder):\r\n BUILDER_CONFIG_CLASS = BuilderConfig\r\n\r\n def _info(self):\r\n return DatasetInfo()\r\n\r\n def _split_generators(self, dl_manager):\r\n \"\"\" We handle string, list and dicts in datafiles\r\n \"\"\"\r\n if isinstance(self.config.data_files, (str, list, tuple)):\r\n files = self.config.data_files\r\n if isinstance(files, str):\r\n files = [files]\r\n return [SplitGenerator(name=Split.TRAIN, gen_kwargs={\"files\": files})]\r\n splits = []\r\n for split_name in [Split.TRAIN, Split.VALIDATION, Split.TEST]:\r\n if split_name in self.config.data_files:\r\n files = self.config.data_files[split_name]\r\n if isinstance(files, str):\r\n files = [files]\r\n splits.append(SplitGenerator(name=split_name, gen_kwargs={\"files\": files}))\r\n return splits\r\n\r\n def _prepare_split(self, split_generator):\r\n fname = \"{}-{}.arrow\".format(self.name, split_generator.name)\r\n fpath = os.path.join(self._cache_dir, fname)\r\n\r\n writer = ArrowWriter(path=fpath)\r\n\r\n generator = self._generate_tables(**split_generator.gen_kwargs)\r\n for key, table in utils.tqdm(generator, unit=\" tables\", leave=False):\r\n writer.write_table(table)\r\n num_examples, num_bytes = writer.finalize()\r\n\r\n split_generator.split_info.num_examples = num_examples\r\n split_generator.split_info.num_bytes = num_bytes\r\n # this is where the error is coming from\r\n # def parse_schema(schema, schema_dict):\r\n # for field in schema:\r\n # if pa.types.is_struct(field.type):\r\n # schema_dict[field.name] = {}\r\n # parse_schema(field.type, schema_dict[field.name])\r\n # elif pa.types.is_list(field.type) and pa.types.is_struct(field.type.value_type):\r\n # schema_dict[field.name] = {}\r\n # parse_schema(field.type.value_type, schema_dict[field.name])\r\n # else:\r\n # schema_dict[field.name] = Value(str(field.type))\r\n # \r\n # parse_schema(writer.schema, features)\r\n # self.info.features = Features(features)\r\n\r\n def _generate_tables(self, files):\r\n for i, file in enumerate(files):\r\n pa_table = paj.read_json(\r\n file\r\n )\r\n yield i, pa_table\r\n```\r\n\r\nSo I basically just don't populate the `self.info.features` though this doesn't seem to cause any problems in my downstream applications. \r\n\r\nThe other workaround I was doing was to just use pyarrow.json to build a table and then to create the Dataset with its constructor or from_table methods. `load_dataset` has nice split logic, so I'd prefer to use that.\r\n\r\n",
"Also noticed that if you for example in a loader script\r\n\r\n```\r\nfrom nlp import ArrowBasedBuilder\r\n\r\nclass MyBuilder(ArrowBasedBuilder):\r\n...\r\n\r\n```\r\nand use that in the subclass, it will be on the module's __dict__ and will be selected before the `MyBuilder` subclass, and it will raise `NotImplementedError` on its `_generate_examples` method... In the code it check for abstract classes but Builder and ArrowBasedBuilder aren't abstract classes, they're regular classes with `@abstract_methods`.",
"Indeed this is part of a more general limitation which is the fact that we should generate and update the `features` from the auto-inferred Arrow schema when they are not provided (also happen when a user change the schema using `map()`, the features should be auto-generated and guessed as much as possible to keep the `features` synced with the underlying Arrow table schema).\r\n\r\nWe will try to solve this soon."
] |
https://api.github.com/repos/huggingface/datasets/issues/2717 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2717/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2717/comments | https://api.github.com/repos/huggingface/datasets/issues/2717/events | https://github.com/huggingface/datasets/pull/2717 | 952,979,976 | MDExOlB1bGxSZXF1ZXN0Njk3MDkzNDEx | 2,717 | Fix shuffle on IterableDataset that disables batching in case any functions were mapped | [] | closed | false | null | 0 | 2021-07-26T14:42:22Z | 2021-07-26T18:04:14Z | 2021-07-26T16:30:06Z | null | Made a very minor change to fix the issue#2716. Added the missing argument in the constructor call.
As discussed in the bug report, the change prevents the `shuffle` method call from resetting the value of the `batched` attribute in `MappedExamplesIterable`.
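For context, the user-facing pattern affected by this looks roughly like the following (a sketch, not a test from the PR; the dataset name is only an example):
```python
from datasets import load_dataset

# A batched map followed by shuffle on a streaming dataset; before this fix,
# shuffling silently dropped the `batched` flag of the mapped iterable.
ds = load_dataset("oscar", "unshuffled_deduplicated_en", split="train", streaming=True)
ds = ds.map(lambda batch: batch, batched=True)
ds = ds.shuffle(seed=42, buffer_size=1000)
```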
Fix #2716. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2717/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2717/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2717.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2717",
"merged_at": "2021-07-26T16:30:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2717.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2717"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1114 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1114/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1114/comments | https://api.github.com/repos/huggingface/datasets/issues/1114/events | https://github.com/huggingface/datasets/pull/1114 | 757,123,638 | MDExOlB1bGxSZXF1ZXN0NTMyNTUyMjE1 | 1,114 | Add sesotho ner corpus | [] | closed | false | null | 0 | 2020-12-04T13:59:41Z | 2020-12-04T15:02:07Z | 2020-12-04T15:02:07Z | null | Clean Sesotho PR | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1114/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1114/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1114.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1114",
"merged_at": "2020-12-04T15:02:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1114.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1114"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4831 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4831/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4831/comments | https://api.github.com/repos/huggingface/datasets/issues/4831/events | https://github.com/huggingface/datasets/pull/4831 | 1,336,199,643 | PR_kwDODunzps49Cibf | 4,831 | Add oversampling strategies to interleave datasets | [] | closed | false | null | 5 | 2022-08-11T16:24:51Z | 2023-07-11T15:57:48Z | 2022-08-24T16:46:07Z | null | Hello everyone,
Here is a proposal to improve the `interleave_datasets` function.
Following Issue #3064 and @lhoestq's [comment](https://github.com/huggingface/datasets/issues/3064#issuecomment-1022333385), I propose code that performs oversampling when interleaving a list of `Dataset` objects.
I encountered this problem myself while trying to implement training on a multilingual dataset, following a training strategy similar to that of the [XLSUM paper](https://arxiv.org/pdf/2106.13822.pdf), a multilingual abstractive summarization dataset where the multilingual training set is created by sampling from the languages according to a smoothing strategy. The main idea is to sample languages with few examples more frequently than the other languages.
As in Issue #3064, the current default strategy is an undersampling strategy, which stops as soon as a dataset runs out of samples. The new `all_exhausted` strategy stops building the new dataset only once all samples in each dataset have been added at least once.
How does it work in practice:
- if ``probabilities`` is `None` and the strategy is `all_exhausted`, it simply performs a round robin interleaving that stops when the longest dataset is out of samples. Here the new dataset length will be $maxLengthDataset*nbDataset$.
- if ``probabilities`` is not `None` and the strategy is `all_exhausted`, it keeps track of the datasets that have run out of samples but continues to add them to the new dataset, and stops as soon as every dataset has run out of samples at least once.
- In the other cases, it keeps the same behaviour as before, except that this time, when probabilities are specified, it really stops AS SOON AS a dataset is out of samples.
More on the last sentence:
The previous example of `interleave_datasets` was:
>>> from datasets import Dataset, interleave_datasets
>>> d1 = Dataset.from_dict({"a": [0, 1, 2]})
>>> d2 = Dataset.from_dict({"a": [10, 11, 12]})
>>> d3 = Dataset.from_dict({"a": [20, 21, 22]})
>>> dataset = interleave_datasets([d1, d2, d3])
>>> dataset["a"]
[0, 10, 20, 1, 11, 21, 2, 12, 22]
>>> dataset = interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42)
>>> dataset["a"]
[10, 0, 11, 1, 2, 20, 12]
With my implementation, `dataset = interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42)` gives:
>>> dataset["a"]
[10, 0, 11, 1, 2]
because `d1` is already out of samples just after `2` is added.
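To make the `all_exhausted` behaviour concrete, here is a minimal sketch of the exhaustion-tracking loop over plain Python lists (an illustration of the idea, not the actual implementation in this PR):
```python
import random

def interleave_all_exhausted(sources, probabilities=None, seed=None):
    """Yield items until every source has been exhausted at least once;
    sources that run out early restart from the beginning (oversampling)."""
    rng = random.Random(seed)
    iterators = [iter(s) for s in sources]
    exhausted = [False] * len(sources)
    while True:
        if probabilities is None:
            picks = range(len(sources))  # plain round robin
        else:
            picks = rng.choices(range(len(sources)), weights=probabilities, k=1)
        for i in picks:
            try:
                yield next(iterators[i])
            except StopIteration:
                exhausted[i] = True
                if all(exhausted):  # every source has been seen in full
                    return
                iterators[i] = iter(sources[i])  # restart -> oversampling
                yield next(iterators[i])

print(list(interleave_all_exhausted([[0, 1, 2], [10, 11]])))
# -> [0, 10, 1, 11, 2, 10]
```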
Example of the results of applying the different strategies:
>>> from datasets import Dataset, interleave_datasets
>>> d1 = Dataset.from_dict({"a": [0, 1, 2]})
>>> d2 = Dataset.from_dict({"a": [10, 11, 12]})
>>> d3 = Dataset.from_dict({"a": [20, 21, 22]})
>>> dataset = interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42, stopping_strategy="all_exhausted")
>>> dataset["a"]
[10, 0, 11, 1, 2, 20, 12, 10, 0, 1, 2, 21, 0, 11, 1, 2, 0, 1, 12, 2, 10, 0, 22]
>>> dataset = interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42)
>>> dataset["a"]
[10, 0, 11, 1, 2]
>>> dataset = interleave_datasets([d1, d2, d3])
>>> dataset["a"]
[0, 10, 20, 1, 11, 21, 2, 12, 22]
>>> dataset = interleave_datasets([d1, d2, d3], stopping_strategy="all_exhausted")
>>> dataset["a"]
[0, 10, 20, 1, 11, 21, 2, 12, 22]
>>> d1 = Dataset.from_dict({"a": [0, 1, 2]})
>>> d2 = Dataset.from_dict({"a": [10, 11, 12, 13]})
>>> d3 = Dataset.from_dict({"a": [20, 21, 22, 23, 24]})
>>> dataset = interleave_datasets([d1, d2, d3])
>>> dataset["a"]
[0, 10, 20, 1, 11, 21, 2, 12, 22]
>>> dataset = interleave_datasets([d1, d2, d3], stopping_strategy="all_exhausted")
>>> dataset["a"]
[0, 10, 20, 1, 11, 21, 2, 12, 22, 0, 13, 23, 1, 0, 24]
>>> dataset = interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42)
>>> dataset["a"]
[10, 0, 11, 1, 2]
>>> dataset = interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42, stopping_strategy="all_exhausted")
>>> dataset["a"]
[10, 0, 11, 1, 2, 20, 12, 13, ..., 0, 1, 2, 0, 24]
**Final note:** I've been using that code for a research project involving a large-scale multilingual dataset. One should be careful when using oversampling to avoid exploding the size of the dataset. For example, if a very large dataset has a low probability of being sampled, the final dataset may be several times the size of that large dataset.
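To put rough, purely illustrative numbers on that: interleaving a 1,000,000-example dataset drawn with probability 0.1 together with a 1,000-example dataset drawn with probability 0.9 under `all_exhausted` needs on the order of 1,000,000 / 0.1 = 10,000,000 draws before the large dataset has been seen once, so the resulting dataset ends up roughly ten times the size of the large one.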
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4831/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4831/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4831.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4831",
"merged_at": "2022-08-24T16:46:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4831.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4831"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4831). All of your documentation changes will be reflected on that endpoint.",
"Hi @lhoestq, \r\nThanks for your review! I've added the requested mention in the documentation and corrected the Error type in `interleave_datasets`. \r\nI've also added test cases in `test_arrow_dataset.py`, which was useful since it allow me to detect an error in the case of an oversampling strategy with no sampling probabilities. \r\nCould you double check this part ? I've commented the code to explain the approach.\r\nThanks!\r\n",
"@ylacombe Thanks for your effort!\r\n\r\n> Final note: I've been using that code for a research project involving a large-scale multilingual dataset. One should be careful when using oversampling to avoid exploding the size of the dataset. For example, if a very large data set has a low probability of being sampled, the final dataset may be several times the size of that large data set.\r\n\r\nMay I ask why is that, and how to solve it? In some scenarios, such as domain adaptation with limited resources, it is normal to have a big generic dataset and a small in-domain dataset.\r\n\r\nHere is an example with data sizes 8:2 and oversampling ratios 0.2:0.8\r\n\r\n```python\r\nfrom datasets import Dataset, interleave_datasets\r\n\r\nd1 = Dataset.from_dict({\"a\": [1, 2, 3, 4, 5, 6, 7, 8]})\r\nd2 = Dataset.from_dict({\"a\": [9, 10]})\r\n\r\nnew_d = interleave_datasets([d1, d2], probabilities=[0.2, 0.8], seed=42, stopping_strategy=\"all_exhausted\")\r\nprint(len(new_d))\r\nprint(new_d[\"a\"])\r\n```\r\n\r\n> 37\r\n> [9, 10, 9, 10, 1, 9, 10, 9, 2, 10, 9, 10, 9, 10, 9, 10, 9, 3, 10, 9, 10, 9, 10, 9, 10, 4, 9, 5, 6, 10, 9, 10, 9, 10, 9, 7, 8]\r\n\r\nThe ratios sampled from the two original datasets to the output dataset are correct. However, the length of the output dataset is 37, which is too big. I think it should be only large enough to make the smaller dataset similar in size to the bigger dataset. Any solution for this? Many thanks!\r\n\r\n",
"Hi @ymoslem, it's a great question and yes, it's normal to have two different-sized datasets to interleave!\r\n\r\nMy recommendation here would be to either use probabilities more biased towards the large model (e.g `[0.8, 0.2]`) so that the big dataset is exhausted more quickly, or to not use probabilities altogether - in that case, `new_d` length will be 16 (`nb_datasets*len(largest_dataset)`).\r\n\r\nLet me know if I need to be clearer!\r\n ",
"@ylacombe Many thanks for your prompt response! As we needed to implement certain oversampling experiments, we ended up using Pandas.\r\n\r\nConsidering each dataset a class with a distinct \"label\":\r\n```python\r\nimport pandas as pd\r\n\r\ndef oversample(df):\r\n classes = df.label.value_counts().to_dict()\r\n most = max(classes.values())\r\n classes_list = []\r\n for key in classes:\r\n classes_list.append(df[df['label'] == key])\r\n classes_sample = []\r\n for i in range(1,len(classes_list)):\r\n classes_sample.append(classes_list[i].sample(most, replace=True))\r\n df_maybe = pd.concat(classes_sample)\r\n final_df = pd.concat([df_maybe,classes_list[0]], axis=0)\r\n final_df = final_df.reset_index(drop=True)\r\n return final_df\r\n```\r\n[Reference](https://medium.com/analytics-vidhya/undersampling-and-oversampling-an-old-and-a-new-approach-4f984a0e8392)"
] |
https://api.github.com/repos/huggingface/datasets/issues/1468 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1468/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1468/comments | https://api.github.com/repos/huggingface/datasets/issues/1468/events | https://github.com/huggingface/datasets/pull/1468 | 761,607,531 | MDExOlB1bGxSZXF1ZXN0NTM2MjQ5OTg0 | 1,468 | add Indonesian newspapers (id_newspapers_2018) | [] | closed | false | null | 6 | 2020-12-10T20:54:12Z | 2020-12-12T08:50:51Z | 2020-12-11T17:04:41Z | null | The dataset contains around 500K articles (136M of words) from 7 Indonesian newspapers. The size of uncompressed 500K json files (newspapers-json.tgz) is around 2.2GB. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1468/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1468/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1468.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1468",
"merged_at": "2020-12-11T17:04:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1468.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1468"
} | true | [
"Looks like there's a `Path` issue on windows. Could you try switching to\r\n`glob.glob(os.path.join(article_dir, \"*.json\"))`",
"> Looks like there's a `Path` issue on windows. Could you try switching to\r\n> `glob.glob(os.path.join(article_dir, \"*.json\"))`\r\n\r\nThanks, I replaced it with glob. Let's see if it solves the issue. Anyway, the main directory has a space, could it make the issue on windows? the test on linux don't have this problem.",
"It seems glob doesn't help also. Btw, one of the failing test tried to connect aws which failed:\r\n```\r\nC:\\tools\\miniconda3\\lib\\site-packages\\urllib3\\connection.py:160: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\n\r\naddress = ('s3.amazonaws.com', 443), timeout = 10, source_address = None\r\nsocket_options = [(6, 1, 1)]\r\n\r\n```\r\nWhy did it try to connect to aws? I don't use it.",
"It seems that the circleci make a test for whole datasets repository, that means if only one of the dataset in the official repository has a download issue, this will also affect the test of a new dataset like mine, isn't it?\r\nI changed the url to my newspaper dataset which contains only few simple json files and simple directory structure. But it still failed. And it failed not only on windows test. This is one of the error message:\r\n```\r\n-- Docs: https://docs.pytest.org/en/stable/warnings.html\r\n=========================== short test summary info ============================\r\nFAILED tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_ajgt_twitter_ar\r\nFAILED tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_chr_en\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_load_dataset_ajgt_twitter_ar\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_load_dataset_chr_en\r\n===== 4 failed, 2667 passed, 2052 skipped, 4 warnings in 432.05s (0:07:12) =====\r\n\r\nExited with code exit status 1\r\nCircleCI received exit code 1\r\n```\r\nThe test failed on twitter dataset even my dataset has nothing to do with twitter? ",
"merging since the CI is fixed on master",
"Hi, thanks for merging the dataset. I create a new PR (#1499) since I need to update the link to the dataset. "
] |
https://api.github.com/repos/huggingface/datasets/issues/4790 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4790/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4790/comments | https://api.github.com/repos/huggingface/datasets/issues/4790/events | https://github.com/huggingface/datasets/issues/4790 | 1,328,546,904 | I_kwDODunzps5PMARY | 4,790 | Issue with fine classes in trec dataset | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 0 | 2022-08-04T12:28:51Z | 2022-08-22T16:14:16Z | 2022-08-22T16:14:16Z | null | ## Describe the bug
According to their paper, the TREC dataset contains 2 kinds of classes:
- 6 coarse classes: TREC-6
- 50 fine classes: TREC-50
However, our implementation only has 47 (instead of 50) fine classes. The reason for this is that we only considered the last segment of the label, which is repeated for several coarse classes:
- We have one `desc` fine label instead of 2:
- `DESC:desc`
- `HUM:desc`
- We have one `other` fine label instead of 3:
- `ENTY:other`
- `LOC:other`
- `NUM:other`
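A minimal illustration of the collision (not the dataset script itself), using a handful of raw TREC labels:
```python
raw_labels = ["DESC:desc", "HUM:desc", "ENTY:other", "LOC:other", "NUM:other"]

# Keeping only the segment after ":" collapses distinct fine classes into one:
print({label.split(":")[1] for label in raw_labels})  # {'desc', 'other'}

# Keeping the full "COARSE:fine" string preserves all five of them:
print(set(raw_labels))
```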
From their paper:
> We define a two-layered taxonomy, which represents a natural semantic classification for typical answers in the TREC task. The hierarchy contains 6 coarse classes and 50 fine classes,
> Each coarse class contains a non-overlapping set of fine classes.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4790/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4790/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/4855 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4855/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4855/comments | https://api.github.com/repos/huggingface/datasets/issues/4855/events | https://github.com/huggingface/datasets/issues/4855 | 1,339,699,975 | I_kwDODunzps5P2jMH | 4,855 | Dataset Viewer issue for super_glue | [
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | closed | false | null | 1 | 2022-08-16T01:34:56Z | 2022-08-22T10:08:01Z | 2022-08-22T10:07:45Z | null | ### Link
https://huggingface.co/datasets/super_glue
### Description
can't view super_glue dataset on the web page
### Owner
_No response_ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4855/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4855/timeline | null | completed | null | null | false | [
"Thanks for reporting @wzsxxa.\r\n\r\nHowever the \"super_glue\" dataset is rendered properly by the Dataset preview: https://huggingface.co/datasets/super_glue"
] |
https://api.github.com/repos/huggingface/datasets/issues/726 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/726/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/726/comments | https://api.github.com/repos/huggingface/datasets/issues/726/events | https://github.com/huggingface/datasets/issues/726 | 719,313,754 | MDU6SXNzdWU3MTkzMTM3NTQ= | 726 | "Checksums didn't match for dataset source files" error while loading openwebtext dataset | [] | closed | false | null | 8 | 2020-10-12T11:45:10Z | 2022-02-17T17:53:54Z | 2022-02-15T10:38:57Z | null | Hi,
I have encountered this problem while loading the openwebtext dataset:
```
>>> dataset = load_dataset('openwebtext')
Downloading and preparing dataset openwebtext/plain_text (download: 12.00 GiB, generated: 37.04 GiB, post-processed: Unknown size, total: 49.03 GiB) to /home/admin/.cache/huggingface/datasets/openwebtext/plain_text/1.0.0/5c636399c7155da97c982d0d70ecdce30fbca66a4eb4fc768ad91f8331edac02...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/admin/workspace/anaconda3/envs/torch1.6-py3.7/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset
ignore_verifications=ignore_verifications,
File "/home/admin/workspace/anaconda3/envs/torch1.6-py3.7/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/admin/workspace/anaconda3/envs/torch1.6-py3.7/lib/python3.7/site-packages/datasets/builder.py", line 536, in _download_and_prepare
self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files"
File "/home/admin/workspace/anaconda3/envs/torch1.6-py3.7/lib/python3.7/site-packages/datasets/utils/info_utils.py", line 39, in verify_checksums
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://zenodo.org/record/3834942/files/openwebtext.tar.xz']
```
I think this problem is caused by a change in the released dataset. Or should I download the dataset manually?
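In the meantime, two workarounds that are often suggested for stale checksums (a sketch — use with care, since the upstream archive may genuinely have changed):
```python
from datasets import load_dataset

# Skip the checksum verification entirely:
ds = load_dataset("openwebtext", ignore_verifications=True)

# Or force a fresh download so new checksums are recorded:
ds = load_dataset("openwebtext", download_mode="force_redownload")
```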
Sorry for releasing the unfinished issue by mistake. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 2,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/726/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/726/timeline | null | completed | null | null | false | [
"Hi try, to provide more information please.\r\n\r\nExample code in a colab to reproduce the error, details on what you are trying to do and what you were expected and details on your environment (OS, PyPi packages version).",
"> Hi try, to provide more information please.\r\n> \r\n> Example code in a colab to reproduce the error, details on what you are trying to do and what you were expected and details on your environment (OS, PyPi packages version).\r\n\r\nI have update the description, sorry for the incomplete issue by mistake.",
"Hi, I have manually downloaded the compressed dataset `openwebtext.tar.xz' and use the following command to preprocess the examples:\r\n```\r\n>>> dataset = load_dataset('/home/admin/workspace/datasets/datasets-master/datasets-master/datasets/openwebtext', data_dir='/home/admin/workspace/datasets')\r\nUsing custom data configuration default\r\nDownloading and preparing dataset openwebtext/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/admin/.cache/huggingface/datasets/openwebtext/default/0.0.0/5c636399c7155da97c982d0d70ecdce30fbca66a4eb4fc768ad91f8331edac02...\r\nDataset openwebtext downloaded and prepared to /home/admin/.cache/huggingface/datasets/openwebtext/default/0.0.0/5c636399c7155da97c982d0d70ecdce30fbca66a4eb4fc768ad91f8331edac02. Subsequent calls will reuse this data.\r\n>>> len(dataset['train'])\r\n74571\r\n>>>\r\n```\r\nThe size of the pre-processed example file is only 354MB, however the processed bookcorpus dataset is 4.6g. Are there any problems?",
"NonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n\r\ni got this issue when i try to work on my own datasets kindly tell me, from where i can get checksums of train and dev file in my github repo",
"Hi, I got the similar issue for xnli dataset while working on colab with python3.7. \r\n\r\n`nlp.load_dataset(path = 'xnli')`\r\n\r\nThe above command resulted in following issue : \r\n```\r\nNonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://www.nyu.edu/projects/bowman/xnli/XNLI-1.0.zip']\r\n```\r\n\r\nAny idea how to fix this ?",
"Did anyone figure out how to fix this error?",
"Fixed by:\r\n- #2857",
"Says fixed but I'm still getting it. \r\n\r\ncommand:\r\n\r\n dataset = load_dataset(\"ted_talks_iwslt\", language_pair=(\"en\", \"es\"), year=\"2014\",download_mode=\"force_redownload\")\r\n\r\ngot:\r\n\r\nUsing custom data configuration en_es_2014-35a2d3350a0f9823\r\nDownloading and preparing dataset ted_talks_iwslt/en_es_2014 (download: 2.15 KiB, generated: Unknown size, post-processed: Unknown size, total: 2.15 KiB) to /home/ken/.cache/huggingface/datasets/ted_talks_iwslt/en_es_2014-35a2d3350a0f9823/1.1.0/43935b3fe470c753a023642e1f54b068c590847f9928bd3f2ec99f15702ad6a6...\r\nDownloading:\r\n2.21k/? [00:00<00:00, 141kB/s]\r\n\r\nNonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://drive.google.com/u/0/uc?id=1Cz1Un9p8Xn9IpEMMrg2kXSDt0dnjxc4z&export=download']"
] |
https://api.github.com/repos/huggingface/datasets/issues/5485 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5485/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5485/comments | https://api.github.com/repos/huggingface/datasets/issues/5485/events | https://github.com/huggingface/datasets/pull/5485 | 1,563,002,829 | PR_kwDODunzps5I2ER2 | 5,485 | Add section in tutorial for IterableDataset | [] | closed | false | null | 2 | 2023-01-30T18:43:04Z | 2023-02-01T18:15:38Z | 2023-02-01T18:08:46Z | null | Introduces an `IterableDataset` and how to access it in the tutorial section. It also adds a brief next step section at the end to provide a path for users who want more explanation and a path for users who want something more practical and learn how to preprocess these dataset types. It'll complement the awesome new doc introduced in:
- #5410 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5485/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5485/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5485.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5485",
"merged_at": "2023-02-01T18:08:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5485.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5485"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008492 / 0.011353 (-0.002861) | 0.004717 / 0.011008 (-0.006292) | 0.101111 / 0.038508 (0.062602) | 0.029129 / 0.023109 (0.006019) | 0.307564 / 0.275898 (0.031666) | 0.367038 / 0.323480 (0.043558) | 0.007105 / 0.007986 (-0.000881) | 0.003622 / 0.004328 (-0.000706) | 0.078370 / 0.004250 (0.074120) | 0.036960 / 0.037052 (-0.000093) | 0.315612 / 0.258489 (0.057123) | 0.353601 / 0.293841 (0.059760) | 0.032900 / 0.128546 (-0.095647) | 0.011405 / 0.075646 (-0.064241) | 0.322331 / 0.419271 (-0.096940) | 0.040823 / 0.043533 (-0.002710) | 0.306734 / 0.255139 (0.051595) | 0.328155 / 0.283200 (0.044955) | 0.087169 / 0.141683 (-0.054514) | 1.460543 / 1.452155 (0.008389) | 1.498094 / 1.492716 (0.005378) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.011863 / 0.018006 (-0.006143) | 0.416315 / 0.000490 (0.415826) | 0.003463 / 0.000200 (0.003263) | 0.000075 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023219 / 0.037411 (-0.014192) | 0.096469 / 0.014526 (0.081943) | 0.105960 / 0.176557 (-0.070596) | 0.148993 / 0.737135 (-0.588142) | 0.108112 / 0.296338 (-0.188226) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.415662 / 0.215209 (0.200453) | 4.155111 / 2.077655 (2.077456) | 1.834943 / 1.504120 (0.330823) | 1.622752 / 1.541195 (0.081557) | 1.701630 / 1.468490 
(0.233140) | 0.690596 / 4.584777 (-3.894181) | 3.399385 / 3.745712 (-0.346327) | 3.140521 / 5.269862 (-2.129341) | 1.609152 / 4.565676 (-2.956524) | 0.082132 / 0.424275 (-0.342143) | 0.012343 / 0.007607 (0.004735) | 0.532715 / 0.226044 (0.306670) | 5.323032 / 2.268929 (3.054104) | 2.326625 / 55.444624 (-53.118000) | 1.944263 / 6.876477 (-4.932213) | 1.994015 / 2.142072 (-0.148058) | 0.813805 / 4.805227 (-3.991422) | 0.149233 / 6.500664 (-6.351431) | 0.065318 / 0.075469 (-0.010151) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.212441 / 1.841788 (-0.629347) | 13.979069 / 8.074308 (5.904761) | 14.003998 / 10.191392 (3.812606) | 0.146956 / 0.680424 (-0.533468) | 0.028564 / 0.534201 (-0.505637) | 0.392370 / 0.579283 (-0.186913) | 0.399695 / 0.434364 (-0.034669) | 0.473481 / 0.540337 (-0.066856) | 0.562625 / 1.386936 (-0.824311) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006821 / 0.011353 (-0.004532) | 0.004570 / 0.011008 (-0.006438) | 0.076217 / 0.038508 (0.037709) | 0.028888 / 0.023109 (0.005779) | 0.345431 / 0.275898 (0.069533) | 0.389246 / 0.323480 (0.065766) | 0.005939 / 0.007986 (-0.002046) | 0.003356 / 0.004328 (-0.000973) | 0.075880 / 0.004250 (0.071629) | 0.041427 / 0.037052 (0.004374) | 0.344481 / 0.258489 (0.085992) | 0.398508 / 0.293841 (0.104667) | 0.031801 / 0.128546 (-0.096745) | 0.011763 / 0.075646 (-0.063884) | 0.085600 / 0.419271 (-0.333672) | 0.042656 / 0.043533 (-0.000876) | 0.345893 / 0.255139 (0.090754) | 0.376910 / 0.283200 (0.093711) | 0.092451 / 0.141683 (-0.049232) | 1.461222 / 1.452155 (0.009068) | 1.555822 / 1.492716 (0.063106) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.235781 / 0.018006 (0.217774) | 0.418485 / 0.000490 (0.417995) | 0.005560 / 0.000200 (0.005360) | 0.000078 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025410 / 0.037411 (-0.012001) | 0.103780 / 0.014526 (0.089254) | 0.110183 / 0.176557 (-0.066374) | 0.151097 / 0.737135 (-0.586039) | 0.112539 / 0.296338 (-0.183799) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436686 / 0.215209 (0.221477) | 4.341594 / 2.077655 (2.263940) | 2.062309 / 1.504120 (0.558190) | 1.857461 / 1.541195 (0.316267) | 1.947204 / 1.468490 (0.478713) | 0.699641 / 4.584777 (-3.885136) | 3.406983 / 3.745712 (-0.338729) | 3.294705 / 5.269862 (-1.975157) | 1.360582 / 4.565676 (-3.205095) | 0.083025 / 0.424275 (-0.341250) | 0.012461 / 0.007607 (0.004854) | 0.537767 / 0.226044 (0.311722) | 5.393316 / 2.268929 (3.124387) | 2.516692 / 55.444624 (-52.927932) | 2.163987 / 6.876477 (-4.712490) | 2.220480 / 2.142072 (0.078408) | 0.810648 / 4.805227 (-3.994579) | 0.151820 / 6.500664 (-6.348844) | 0.068080 / 0.075469 (-0.007389) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.279382 / 1.841788 (-0.562405) | 13.989947 / 8.074308 (5.915638) | 14.039229 / 10.191392 (3.847836) | 0.141071 / 0.680424 (-0.539352) | 0.017118 / 0.534201 (-0.517083) | 0.381558 / 0.579283 (-0.197725) | 0.390407 / 0.434364 (-0.043957) | 0.440920 / 0.540337 (-0.099418) | 0.525478 / 1.386936 (-0.861458) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/2918 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2918/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2918/comments | https://api.github.com/repos/huggingface/datasets/issues/2918/events | https://github.com/huggingface/datasets/issues/2918 | 997,063,347 | I_kwDODunzps47bfqz | 2,918 | `Can not decode content-encoding: gzip` when loading `scitldr` dataset with streaming | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "fef2c0",
"default": false,
"description": "",
"id": 3287858981,
"name": "streaming",
"node_id": "MDU6TGFiZWwzMjg3ODU4OTgx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/streaming"
}
] | closed | false | null | 3 | 2021-09-15T13:06:07Z | 2021-12-01T08:15:00Z | 2021-12-01T08:15:00Z | null | ## Describe the bug
Trying to load the `"FullText"` config of the `"scitldr"` dataset with `streaming=True` raises an error from `aiohttp`:
```python
ClientPayloadError: 400, message='Can not decode content-encoding: gzip'
```
cc @lhoestq
## Steps to reproduce the bug
```python
from datasets import load_dataset
iter_dset = iter(
load_dataset("scitldr", name="FullText", split="test", streaming=True)
)
next(iter_dset)
```
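For comparison, a non-streaming load of the same config should avoid this code path, since the files are downloaded and cached locally rather than read over HTTP range requests; this is only a sanity-check sketch, not a confirmed workaround:
```python
from datasets import load_dataset

# Non-streaming: the JSONL files are downloaded and cached, so no HTTP range
# requests (and no gzip Content-Encoding handling) are involved while iterating.
dset = load_dataset("scitldr", name="FullText", split="test")
print(dset[0])
```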
## Expected results
Returns the first sample of the dataset
## Actual results
Calling `__next__` crashes with the following Traceback:
```python
----> 1 next(dset_iter)
~\miniconda3\envs\datasets\lib\site-packages\datasets\iterable_dataset.py in __iter__(self)
339
340 def __iter__(self):
--> 341 for key, example in self._iter():
342 if self.features:
343 # we encode the example for ClassLabel feature types for example
~\miniconda3\envs\datasets\lib\site-packages\datasets\iterable_dataset.py in _iter(self)
336 else:
337 ex_iterable = self._ex_iterable
--> 338 yield from ex_iterable
339
340 def __iter__(self):
~\miniconda3\envs\datasets\lib\site-packages\datasets\iterable_dataset.py in __iter__(self)
76
77 def __iter__(self):
---> 78 for key, example in self.generate_examples_fn(**self.kwargs):
79 yield key, example
80
~\.cache\huggingface\modules\datasets_modules\datasets\scitldr\72d6e2195786c57e1d343066fb2cc4f93ea39c5e381e53e6ae7c44bbfd1f05ef\scitldr.py in _generate_examples(self, filepath, split)
162
163 with open(filepath, encoding="utf-8") as f:
--> 164 for id_, row in enumerate(f):
165 data = json.loads(row)
166 if self.config.name == "AIC":
~\miniconda3\envs\datasets\lib\site-packages\fsspec\implementations\http.py in read(self, length)
496 else:
497 length = min(self.size - self.loc, length)
--> 498 return super().read(length)
499
500 async def async_fetch_all(self):
~\miniconda3\envs\datasets\lib\site-packages\fsspec\spec.py in read(self, length)
1481 # don't even bother calling fetch
1482 return b""
-> 1483 out = self.cache._fetch(self.loc, self.loc + length)
1484 self.loc += len(out)
1485 return out
~\miniconda3\envs\datasets\lib\site-packages\fsspec\caching.py in _fetch(self, start, end)
378 elif start < self.start:
379 if self.end - end > self.blocksize:
--> 380 self.cache = self.fetcher(start, bend)
381 self.start = start
382 else:
~\miniconda3\envs\datasets\lib\site-packages\fsspec\asyn.py in wrapper(*args, **kwargs)
86 def wrapper(*args, **kwargs):
87 self = obj or args[0]
---> 88 return sync(self.loop, func, *args, **kwargs)
89
90 return wrapper
~\miniconda3\envs\datasets\lib\site-packages\fsspec\asyn.py in sync(loop, func, timeout, *args, **kwargs)
67 raise FSTimeoutError
68 if isinstance(result[0], BaseException):
---> 69 raise result[0]
70 return result[0]
71
~\miniconda3\envs\datasets\lib\site-packages\fsspec\asyn.py in _runner(event, coro, result, timeout)
23 coro = asyncio.wait_for(coro, timeout=timeout)
24 try:
---> 25 result[0] = await coro
26 except Exception as ex:
27 result[0] = ex
~\miniconda3\envs\datasets\lib\site-packages\fsspec\implementations\http.py in async_fetch_range(self, start, end)
538 if r.status == 206:
539 # partial content, as expected
--> 540 out = await r.read()
541 elif "Content-Length" in r.headers:
542 cl = int(r.headers["Content-Length"])
~\miniconda3\envs\datasets\lib\site-packages\aiohttp\client_reqrep.py in read(self)
1030 if self._body is None:
1031 try:
-> 1032 self._body = await self.content.read()
1033 for trace in self._traces:
1034 await trace.send_response_chunk_received(
~\miniconda3\envs\datasets\lib\site-packages\aiohttp\streams.py in read(self, n)
342 async def read(self, n: int = -1) -> bytes:
343 if self._exception is not None:
--> 344 raise self._exception
345
346 # migration problem; with DataQueue you have to catch
ClientPayloadError: 400, message='Can not decode content-encoding: gzip'
```
## Environment info
- `datasets` version: 1.12.0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.8.5
- PyArrow version: 2.0.0
- aiohttp version: 3.7.4.post0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2918/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2918/timeline | null | completed | null | null | false | [
"Hi @SBrandeis, thanks for reporting! ^^\r\n\r\nI think this is an issue with `fsspec`: https://github.com/intake/filesystem_spec/issues/389\r\n\r\nI will ask them if they are planning to fix it...",
"Code to reproduce the bug: `ClientPayloadError: 400, message='Can not decode content-encoding: gzip'`\r\n```python\r\nIn [1]: import fsspec\r\n\r\nIn [2]: import json\r\n\r\nIn [3]: with fsspec.open('https://raw.githubusercontent.com/allenai/scitldr/master/SciTLDR-Data/SciTLDR-FullText/test.jsonl', encoding=\"utf-8\") as f:\r\n ...: for row in f:\r\n ...: data = json.loads(row)\r\n ...:\r\n---------------------------------------------------------------------------\r\nClientPayloadError Traceback (most recent call last)\r\n```",
"Thanks for investigating @albertvillanova ! 🤗 "
] |
https://api.github.com/repos/huggingface/datasets/issues/4241 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4241/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4241/comments | https://api.github.com/repos/huggingface/datasets/issues/4241/events | https://github.com/huggingface/datasets/issues/4241 | 1,217,423,686 | I_kwDODunzps5IkGlG | 4,241 | NonMatchingChecksumError when attempting to download GLUE | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2022-04-27T14:14:21Z | 2022-04-28T07:45:27Z | 2022-04-28T07:45:27Z | null | ## Describe the bug
I am trying to download the GLUE dataset from the NLP module but get an error (see below).
## Steps to reproduce the bug
```python
import nlp
nlp.__version__ # '0.2.0'
nlp.load_dataset('glue', name="rte", download_mode="force_redownload")
```
## Expected results
I expect the dataset to download without an error.
## Actual results
```
INFO:nlp.load:Checking /home/richier/.cache/huggingface/datasets/5fe6ab0df8a32a3371b2e6a969d31d855a19563724fb0d0f163748c270c0ac60.2ea96febf19981fae5f13f0a43d4e2aa58bc619bc23acf06de66675f425a5538.py for additional imports.
INFO:nlp.load:Found main folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/glue/glue.py at /home/richier/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/datasets/glue
INFO:nlp.load:Found specific version folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/glue/glue.py at /home/richier/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/datasets/glue/637080968c182118f006d3ea39dd9937940e81cfffc8d79836eaae8bba307fc4
INFO:nlp.load:Found script file from https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/glue/glue.py to /home/richier/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/datasets/glue/637080968c182118f006d3ea39dd9937940e81cfffc8d79836eaae8bba307fc4/glue.py
INFO:nlp.load:Found dataset infos file from https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/glue/dataset_infos.json to /home/richier/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/datasets/glue/637080968c182118f006d3ea39dd9937940e81cfffc8d79836eaae8bba307fc4/dataset_infos.json
INFO:nlp.load:Found metadata file for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/glue/glue.py at /home/richier/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/datasets/glue/637080968c182118f006d3ea39dd9937940e81cfffc8d79836eaae8bba307fc4/glue.json
INFO:nlp.info:Loading Dataset Infos from /home/richier/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/datasets/glue/637080968c182118f006d3ea39dd9937940e81cfffc8d79836eaae8bba307fc4
INFO:nlp.builder:Generating dataset glue (/home/richier/.cache/huggingface/datasets/glue/rte/1.0.0)
INFO:nlp.builder:Dataset not on Hf google storage. Downloading and preparing it from source
INFO:nlp.utils.file_utils:Couldn't get ETag version for url https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FRTE.zip?alt=media&token=5efa7e85-a0bb-4f19-8ea2-9e1840f077fb
INFO:nlp.utils.file_utils:https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FRTE.zip?alt=media&token=5efa7e85-a0bb-4f19-8ea2-9e1840f077fb not found in cache or force_download set to True, downloading to /home/richier/.cache/huggingface/datasets/downloads/tmpldt3n805
Downloading and preparing dataset glue/rte (download: 680.81 KiB, generated: 1.83 MiB, total: 2.49 MiB) to /home/richier/.cache/huggingface/datasets/glue/rte/1.0.0...
Downloading: 100%|██████████| 73.0/73.0 [00:00<00:00, 73.9kB/s]
INFO:nlp.utils.file_utils:storing https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FRTE.zip?alt=media&token=5efa7e85-a0bb-4f19-8ea2-9e1840f077fb in cache at /home/richier/.cache/huggingface/datasets/downloads/e8b62ee44e6f8b6aea761935928579ffe1aa55d161808c482e0725abbdcf9c64
INFO:nlp.utils.file_utils:creating metadata file for /home/richier/.cache/huggingface/datasets/downloads/e8b62ee44e6f8b6aea761935928579ffe1aa55d161808c482e0725abbdcf9c64
---------------------------------------------------------------------------
NonMatchingChecksumError Traceback (most recent call last)
<ipython-input-7-669a8343dcc1> in <module>
----> 1 nlp.load_dataset('glue', name="rte", download_mode="force_redownload")
~/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)
518 download_mode=download_mode,
519 ignore_verifications=ignore_verifications,
--> 520 save_infos=save_infos,
521 )
522
~/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)
418 verify_infos = not save_infos and not ignore_verifications
419 self._download_and_prepare(
--> 420 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
421 )
422 # Sync info
~/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
458 # Checksums verification
459 if verify_infos:
--> 460 verify_checksums(self.info.download_checksums, dl_manager.get_recorded_sizes_checksums())
461 for split_generator in split_generators:
462 if str(split_generator.split_info.name).lower() == "all":
~/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums)
34 bad_urls = [url for url in expected_checksums if expected_checksums[url] != recorded_checksums[url]]
35 if len(bad_urls) > 0:
---> 36 raise NonMatchingChecksumError(str(bad_urls))
37 logger.info("All the checksums matched successfully.")
38
NonMatchingChecksumError: ['https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FRTE.zip?alt=media&token=5efa7e85-a0bb-4f19-8ea2-9e1840f077fb']
```
## Environment info
- `datasets` version: 2.0.0
- Platform: Linux-4.18.0-348.20.1.el8_5.x86_64-x86_64-with-redhat-8.5-Ootpa
- Python version: 3.6.13
- PyArrow version: 6.0.1
- Pandas version: 1.1.5
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4241/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4241/timeline | null | completed | null | null | false | [
"Hi :)\r\n\r\nI think your issue may be related to the older `nlp` library. I was able to download `glue` with the latest version of `datasets`. Can you try updating with:\r\n\r\n```py\r\npip install -U datasets\r\n```\r\n\r\nThen you can download:\r\n\r\n```py\r\nfrom datasets import load_dataset\r\nds = load_dataset(\"glue\", \"rte\")\r\n```",
"This appears to work. Thank you!\n\nOn Wed, Apr 27, 2022, 1:18 PM Steven Liu ***@***.***> wrote:\n\n> Hi :)\n>\n> I think your issue may be related to the older nlp library. I was able to\n> download glue with the latest version of datasets. Can you try updating\n> with:\n>\n> pip install -U datasets\n>\n> Then you can download:\n>\n> from datasets import load_datasetds = load_dataset(\"glue\", \"rte\")\n>\n> —\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/4241#issuecomment-1111267650>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ACJUEKLUP2EL7ES3RRWJRPTVHFZHBANCNFSM5UPJBYXA>\n> .\n> You are receiving this because you authored the thread.Message ID:\n> ***@***.***>\n>\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/1169 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1169/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1169/comments | https://api.github.com/repos/huggingface/datasets/issues/1169/events | https://github.com/huggingface/datasets/pull/1169 | 757,747,997 | MDExOlB1bGxSZXF1ZXN0NTMzMDY5MzAx | 1,169 | Add Opus fiskmo dataset for Finnish and Swedish for MT task | [] | closed | false | null | 1 | 2020-12-05T17:56:55Z | 2020-12-07T11:04:11Z | 2020-12-07T11:04:11Z | null | Adding fiskmo, a massive parallel corpus for Finnish and Swedish.
For more info: http://opus.nlpl.eu/fiskmo.php | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1169/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1169/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1169.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1169",
"merged_at": "2020-12-07T11:04:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1169.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1169"
} | true | [
"merging since the CI is fixed on master"
] |
https://api.github.com/repos/huggingface/datasets/issues/6056 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6056/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6056/comments | https://api.github.com/repos/huggingface/datasets/issues/6056/events | https://github.com/huggingface/datasets/pull/6056 | 1,815,086,963 | PR_kwDODunzps5WD4RY | 6,056 | Implement proper checkpointing for dataset uploading with resume function that does not require remapping shards that have already been uploaded | [] | open | false | null | 3 | 2023-07-21T03:13:21Z | 2023-07-24T15:17:28Z | null | null | Context: issue #5990
In order to implement the checkpointing, I introduce a metadata folder that keeps one YAML file for each set being uploaded. This YAML file records which shards have already been uploaded and the index of the latest one. Using this information, the push_to_hub function can retrieve the past upload history on demand and continue mapping and uploading from where it left off (an illustrative sketch follows this entry). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6056/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6056/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6056.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6056",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6056.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6056"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6056). All of your documentation changes will be reflected on that endpoint.",
"@lhoestq Reading the filenames is something I tried earlier, but I decided to use the yaml direction because:\r\n\r\n1. The yaml file name is constructed to retain information about the shard_size, and total number of shards, hence ensuring that the files uploaded are not just files that have the same name but actually represent a different configuration of shard_size, and total number of shards. \r\n2. Remembering the total file size is done easily in the yaml, whereas alternatively I am not sure how one could access the file size of the uploaded files without downloading them.\r\n3. I also had an issue earlier with the hashes not being consistent with which the yaml helped -- but this is no longer an issue as I found a way around it. \r\n\r\nIf 1 and 2 can be achieved without an additional yaml, then I would be willing to make those changes. Let me know of any ideas. 1. could be done by changing the data file names, but I'd rather not do that as to prevent breaking existing datasets that try to upload updates to their data. ",
"If the file name depends on the shard's fingerprint **before** mapping then we can know if a shard has been uploaded before mapping and without requiring an extra YAML file. It should do the job imo\r\n\r\n> I also had an issue earlier with the hashes not being consistent with which the yaml helped -- but this is no longer an issue as I found a way around it.\r\n\r\nwhat was the issue ?"
] |
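To make the checkpointing idea in the PR above concrete, here is a purely illustrative sketch of a resume helper reading such a per-split YAML checkpoint; the file layout, field names, and helper are hypothetical and are not the PR's actual code:
```python
import yaml  # requires pyyaml; used here only for the illustration

def next_shard_to_upload(checkpoint_path: str) -> int:
    """Return the index of the next shard to push, or 0 when starting fresh."""
    try:
        with open(checkpoint_path) as f:
            # hypothetical layout, e.g. {"uploaded_shards": [0, 1, 2], "latest_shard_idx": 2}
            state = yaml.safe_load(f)
        return state["latest_shard_idx"] + 1
    except FileNotFoundError:
        return 0
```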
https://api.github.com/repos/huggingface/datasets/issues/6046 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6046/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6046/comments | https://api.github.com/repos/huggingface/datasets/issues/6046/events | https://github.com/huggingface/datasets/issues/6046 | 1,808,154,414 | I_kwDODunzps5rxj8u | 6,046 | Support proxy and user-agent in fsspec calls | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "BDE59C",
"default": false,
"description": "Issues a bit more difficult than \"Good First\" issues",
"id": 3761482852,
"name": "good second issue",
"node_id": "LA_kwDODunzps7gM6xk",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20second%20issue"
}
] | open | false | null | 0 | 2023-07-17T16:39:26Z | 2023-07-17T16:40:37Z | null | null | Since we switched to the new HfFileSystem we no longer apply user's proxy and user-agent.
Using the HTTP_PROXY and HTTPS_PROXY environment variables does work, though, since we use aiohttp to call the HF Hub.
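For reference, a minimal sketch of how a proxy and a custom user-agent can be applied with aiohttp itself (the proxy URL and user-agent string are placeholders; this is not the current `HfFileSystem` API):
```python
import asyncio
import aiohttp

async def fetch(url: str) -> bytes:
    # trust_env=True makes aiohttp honor HTTP_PROXY / HTTPS_PROXY
    async with aiohttp.ClientSession(trust_env=True) as session:
        async with session.get(
            url,
            proxy="http://my-proxy.example:3128",  # placeholder proxy
            headers={"user-agent": "my-app/0.1"},  # placeholder user-agent
        ) as resp:
            resp.raise_for_status()
            return await resp.read()

data = asyncio.run(fetch("https://huggingface.co"))
```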
This can be implemented in `_prepare_single_hop_path_and_storage_options`.
Though ideally the `HfFileSystem` could support passing at least the proxies | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6046/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6046/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/4149 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4149/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4149/comments | https://api.github.com/repos/huggingface/datasets/issues/4149/events | https://github.com/huggingface/datasets/issues/4149 | 1,201,389,221 | I_kwDODunzps5Hm76l | 4,149 | load_dataset for winoground returning decoding error | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 10 | 2022-04-12T08:16:16Z | 2022-05-04T23:40:38Z | 2022-05-04T23:40:38Z | null | ## Describe the bug
I am trying to use datasets to load winoground and I'm getting a JSON decoding error.
## Steps to reproduce the bug
```python
from datasets import load_dataset
token = 'hf_XXXXX' # my HF access token
datasets = load_dataset('facebook/winoground', use_auth_token=token)
```
## Expected results
I downloaded images.zip and examples.jsonl manually. I was expecting some trouble decoding the JSON, so rather than using jsonlines I read the file directly, and was able to get a complete set of 400 examples by doing
```python
import json
with open('examples.jsonl', 'r') as f:
examples = f.read().split('\n')
# Thinking this would error if the JSON is not utf-8 encoded
json_data = [json.loads(x) for x in examples]
print(json_data[-1])
```
and I see
```python
{'caption_0': 'someone is overdoing it',
'caption_1': 'someone is doing it over',
'collapsed_tag': 'Relation',
'id': 399,
'image_0': 'ex_399_img_0',
'image_1': 'ex_399_img_1',
'num_main_preds': 1,
'secondary_tag': 'Morpheme-Level',
'tag': 'Scope, Preposition'}
```
so I'm not sure what's going on here honestly. The file `examples.jsonl` doesn't have non-UTF-8 encoded text.
## Actual results
During the split operation after downloading, datasets encounters an error in the JSON ([trace](https://gist.github.com/odellus/e55d390ca203386bf551f38e0c63a46b) abbreviated for brevity).
```
datasets/packaged_modules/json/json.py:144 in Json._generate_tables(self, files)
...
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte
```
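As a side note, a leading `0xff` byte is what you would see from a JPEG file, which suggests one of the images (rather than `examples.jsonl`) is being handed to the JSON reader; a quick debugging sketch, with a hypothetical image path:
```python
# Hypothetical path; the point is only that JPEG files start with b'\xff\xd8',
# which would explain the 0xff at position 0 in the decode error above.
with open("images/ex_399_img_0.jpg", "rb") as f:
    print(f.read(2))
```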
## Environment info
- `datasets` version: 1.18.4
- Platform: Linux-5.13.0-39-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 7.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4149/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4149/timeline | null | completed | null | null | false | [
"I thought I had fixed it with this after some helpful hints from @severo\r\n```python\r\nimport datasets \r\ntoken = 'hf_XXXXX'\r\ndataset = datasets.load_dataset(\r\n 'facebook/winoground', \r\n name='facebook--winoground', \r\n split='train', \r\n streaming=True,\r\n use_auth_token=token,\r\n)\r\n```\r\nbut I found out that wasn't the case\r\n```python\r\n[x for x in dataset]\r\n...\r\nClientResponseError: 401, message='Unauthorized', url=URL('https://huggingface.co/datasets/facebook/winoground/resolve/a86a60456fbbd242e9a744199071a6bd3e7fd9de/examples.jsonl')\r\n```",
"Hi ! This dataset structure (image + labels in a JSON file) is not supported yet, though we're adding support for this in in #4069 \r\n\r\nThe following structure will be supported soon:\r\n```\r\nmetadata.json\r\nimages/\r\n image0.png\r\n image1.png\r\n ...\r\n```\r\nWhere `metadata.json` is a JSON Lines file with labels or other metadata, and each line must have a \"file_name\" field with the name of the image file.\r\n\r\nFor the moment are only supported:\r\n- JSON files only\r\n- image files only\r\n\r\nSince this dataset is a mix of the two, at the moment it fails trying to read the images as JSON.\r\n\r\nTherefore to be able to load this dataset we need to wait for the new structure to be supported (very soon ^^), or add a dataset script in the repository that reads both the JSON and the images cc @TristanThrush \r\n",
"We'll also investigate the issue with the streaming download manager in https://github.com/huggingface/datasets/issues/4139 ;) thanks for reporting",
"Are there any updates on this?",
"In the meantime, anyone can always download the images.zip and examples.jsonl files directly from huggingface.co - let me know if anyone has issues with that.",
"I mirrored the files at https://huggingface.co/datasets/facebook/winoground in a folder on my local machine `winground`\r\nand when I tried\r\n```python\r\nimport datasets\r\nds = datasets.load_from_disk('./winoground')\r\n```\r\nI get the following error\r\n```python\r\n--------------------------------------------------------------------------\r\nFileNotFoundError Traceback (most recent call last)\r\nInput In [2], in <cell line: 1>()\r\n----> 1 ds = datasets.load_from_disk('./winoground')\r\n\r\nFile ~/.local/lib/python3.8/site-packages/datasets/load.py:1759, in load_from_disk(dataset_path, fs, keep_in_memory)\r\n 1757 return DatasetDict.load_from_disk(dataset_path, fs, keep_in_memory=keep_in_memory)\r\n 1758 else:\r\n-> 1759 raise FileNotFoundError(\r\n 1760 f\"Directory {dataset_path} is neither a dataset directory nor a dataset dict directory.\"\r\n 1761 )\r\n\r\nFileNotFoundError: Directory ./winoground is neither a dataset directory nor a dataset dict directory.\r\n```\r\nso still some work to be done on the backend imo.",
"Note that `load_from_disk` is the function that reloads an Arrow dataset saved with `my_dataset.save_to_disk`.\r\n\r\nOnce we do support images with metadata you'll be able to use `load_dataset(\"facebook/winoground\")` directly (or `load_dataset(\"./winoground\")` of you've cloned the winoground repository locally).",
"Apologies for the delay. I added a custom dataset loading script for winoground. It should work now, with an auth token:\r\n\r\n`examples = load_dataset('facebook/winoground', use_auth_token=<your auth token>)`\r\n\r\nLet me know if there are any issues",
"Adding the dataset loading script definitely didn't take as long as I thought it would 😅",
"killer"
] |
https://api.github.com/repos/huggingface/datasets/issues/4734 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4734/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4734/comments | https://api.github.com/repos/huggingface/datasets/issues/4734/events | https://github.com/huggingface/datasets/issues/4734 | 1,314,495,382 | I_kwDODunzps5OWZuW | 4,734 | Package rouge-score cannot be imported | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2022-07-22T07:15:05Z | 2022-07-22T07:45:19Z | 2022-07-22T07:45:18Z | null | ## Describe the bug
After today's release of `rouge_score-0.0.7`, it seems to be no longer importable. Our CI fails: https://github.com/huggingface/datasets/runs/7463218591?check_suite_focus=true
```
FAILED tests/test_dataset_common.py::LocalDatasetTest::test_builder_class_bigbench
FAILED tests/test_dataset_common.py::LocalDatasetTest::test_builder_configs_bigbench
FAILED tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_bigbench
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_rouge
```
with errors:
```
> from rouge_score import rouge_scorer
E ModuleNotFoundError: No module named 'rouge_score'
```
```
E ImportError: To be able to use rouge, you need to install the following dependency: rouge_score.
E Please install it using 'pip install rouge_score' for instance'
```
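A possible stop-gap for CI while the upstream packaging issue is investigated could be to pin below the release that broke the import (a sketch only; the project may prefer a different fix):
```
pip install "rouge_score<0.0.7"
```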
| {
"+1": 0,
"-1": 0,
"confused": 1,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4734/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4734/timeline | null | completed | null | null | false | [
"We have added a comment on an existing issue opened in their repo: https://github.com/google-research/google-research/issues/1212#issuecomment-1192267130\r\n- https://github.com/google-research/google-research/issues/1212"
] |
https://api.github.com/repos/huggingface/datasets/issues/350 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/350/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/350/comments | https://api.github.com/repos/huggingface/datasets/issues/350/events | https://github.com/huggingface/datasets/pull/350 | 652,398,691 | MDExOlB1bGxSZXF1ZXN0NDQ1NDczODYz | 350 | add from_pandas and from_dict | [] | closed | false | null | 0 | 2020-07-07T15:03:53Z | 2020-07-08T14:14:33Z | 2020-07-08T14:14:32Z | null | I added two new methods to the `Dataset` class:
- `from_pandas()` to create a dataset from a pandas dataframe
- `from_dict()` to create a dataset from a dictionary (keys = columns)
It uses the `pa.Table.from_pandas` and `pa.Table.from_pydict` functions to do so.
It is also possible to specify the feature types via `features=...` if there are ambiguities (null/nan values); otherwise the arrow schema is inferred from the data automatically by pyarrow.
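For illustration, a minimal sketch of how these constructors are typically used (written against the current `datasets` API; names may have differed slightly at the time of this PR):
```python
import pandas as pd
from datasets import Dataset, Features, Value

df = pd.DataFrame({"text": ["hello", "world"], "label": [0, 1]})
ds_from_df = Dataset.from_pandas(df)

# Explicit feature types resolve ambiguities (e.g. columns containing null values)
features = Features({"text": Value("string"), "label": Value("int64")})
ds_from_dict = Dataset.from_dict({"text": ["hello", "world"], "label": [0, 1]}, features=features)

print(ds_from_df.features)
print(ds_from_dict.num_rows)
```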
One question that I have right now:
+ Should we also add a `save()` method that would write the dataset to disk? Right now, if we create a `Dataset` using those two new methods, the data are kept in RAM. Then to reload it we can call the `from_file()` method. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/350/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/350/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/350.diff",
"html_url": "https://github.com/huggingface/datasets/pull/350",
"merged_at": "2020-07-08T14:14:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/350.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/350"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5999 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5999/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5999/comments | https://api.github.com/repos/huggingface/datasets/issues/5999/events | https://github.com/huggingface/datasets/issues/5999 | 1,781,851,513 | I_kwDODunzps5qNOV5 | 5,999 | Getting a 409 error while loading xglue dataset | [] | closed | false | null | 1 | 2023-06-30T04:13:54Z | 2023-06-30T05:57:23Z | 2023-06-30T05:57:22Z | null | ### Describe the bug
Unable to load xglue dataset
### Steps to reproduce the bug
```python
import datasets
dataset = datasets.load_dataset("xglue", "ntg")
```
> ConnectionError: Couldn't reach https://xglue.blob.core.windows.net/xglue/xglue_full_dataset.tar.gz (error 409)
### Expected behavior
Expected the dataset to load
### Environment info
- `datasets` version: 2.13.1
- Platform: Linux-5.15.107+-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.15.1
- PyArrow version: 9.0.0
- Pandas version: 1.5.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5999/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5999/timeline | null | completed | null | null | false | [
"Thanks for reporting, @Praful932.\r\n\r\nLet's continue the conversation on the Hub: https://huggingface.co/datasets/xglue/discussions/5"
] |
https://api.github.com/repos/huggingface/datasets/issues/3169 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3169/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3169/comments | https://api.github.com/repos/huggingface/datasets/issues/3169/events | https://github.com/huggingface/datasets/pull/3169 | 1,036,773,357 | PR_kwDODunzps4ttYmZ | 3,169 | Configurable max filename length in file locks | [] | closed | false | null | 2 | 2021-10-26T21:52:55Z | 2021-10-28T16:14:14Z | 2021-10-28T16:14:13Z | null | Resolve #2924 (https://github.com/huggingface/datasets/issues/2924#issuecomment-952330956) wherein the assumption of file lock maximum filename length to be 255 raises an OSError on encrypted drives (ecryptFS on Linux uses part of the lower filename, reducing the maximum filename size to 143). Allowing this limit to be set in the config module allows this to be modified by users. Will not affect Windows users, as their class passes 255 on init explicitly.
Reproduced with the following example ([the first few lines of a script from Lightning Flash](https://lightning-flash.readthedocs.io/en/latest/reference/speech_recognition.html), fine-tuning a HF model):
```py
import torch
import flash
from flash.audio import SpeechRecognition, SpeechRecognitionData
from flash.core.data.utils import download_data
# 1. Create the DataModule
download_data("https://pl-flash-data.s3.amazonaws.com/timit_data.zip", "./data")
datamodule = SpeechRecognitionData.from_json(
input_fields="file",
target_fields="text",
train_file="data/timit/train.json",
test_file="data/timit/test.json",
)
```
Which gave this traceback:
```py
Traceback (most recent call last):
File "lf_ft.py", line 10, in <module>
datamodule = SpeechRecognitionData.from_json(
File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_module.py", line 1005, in from_json
return cls.from_data_source(
File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_module.py", line 571, in from_data_source
train_dataset, val_dataset, test_dataset, predict_dataset = data_source.to_datasets(
File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_source.py", line 307, in to_datasets
train_dataset = self.generate_dataset(train_data, RunningStage.TRAINING)
File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_source.py", line 344, in generate_dataset
data = load_data(data, mock_dataset)
File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/audio/speech_recognition/data.py", line 103, in load_data
dataset_dict = load_dataset(self.filetype, data_files={stage: str(file)})
File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/load.py", line 1599, in load_dataset
builder_instance = load_dataset_builder(
File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/load.py", line 1457, in load_dataset_builder
builder_instance: DatasetBuilder = builder_cls(
File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/builder.py", line 285, in __init__
with FileLock(lock_path):
File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/utils/filelock.py", line 323, in __enter__
self.acquire()
File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/utils/filelock.py", line 272, in acquire
self._acquire()
File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/utils/filelock.py", line 403, in _acquire
fd = os.open(self._lock_file, open_mode)
OSError: [Errno 36] File name too long: '/home/louis/.cache/huggingface/datasets/_home_louis_.cache_huggingface_datasets_json_default-98e6813a547f72fa_0.0.0_c2d554c3377ea79c7664b93dc65d0803b45e3279000f993c7bfd18937fd7f426.lock'
```
Note the filename is 145 chars long:
```
>>> len("_home_louis_.cache_huggingface_datasets_json_default-98e6813a547f72fa_0.0.0_c2d554c3377ea79c7664b93dc65d0803b45e3279000f993c7bfd18937fd7f426.lock")
145
```
After installing datasets as an editable local package and modifying the script I was running to first include:
```py
import datasets
datasets.config.MAX_DATASET_CONFIG_ID_READABLE_LENGTH = 143
```
The error goes away.
If I instead deliberately set the value incorrectly as 144, the OSError returns:
```
Traceback (most recent call last):
File "lf_ft.py", line 14, in <module>
datamodule = SpeechRecognitionData.from_json(
File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_module.py", line 1005, in from_json
return cls.from_data_source(
File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_module.py", line 571, in from_data_source
train_dataset, val_dataset, test_dataset, predict_dataset = data_source.to_datasets(
File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_source.py", line 307, in to_datasets
train_dataset = self.generate_dataset(train_data, RunningStage.TRAINING)
File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_source.py", line 344, in generate_dataset
data = load_data(data, mock_dataset)
File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/audio/speech_recognition/data.py", line 103, in load_data
dataset_dict = load_dataset(self.filetype, data_files={stage: str(file)})
File "/home/louis/dev/hf_datasets/src/datasets/load.py", line 1605, in load_dataset
builder_instance = load_dataset_builder(
File "/home/louis/dev/hf_datasets/src/datasets/load.py", line 1463, in load_dataset_builder
builder_instance: DatasetBuilder = builder_cls(
File "/home/louis/dev/hf_datasets/src/datasets/builder.py", line 285, in __init__
with FileLock(lock_path):
File "/home/louis/dev/hf_datasets/src/datasets/utils/filelock.py", line 326, in __enter__
self.acquire()
File "/home/louis/dev/hf_datasets/src/datasets/utils/filelock.py", line 275, in acquire
self._acquire()
File "/home/louis/dev/hf_datasets/src/datasets/utils/filelock.py", line 406, in _acquire
fd = os.open(self._lock_file, open_mode)
OSError: [Errno 36] File name too long: '/home/louis/.cache/huggingface/datasets/_home_louis_.cache_huggingface_datasets_json_default-32c812b5c1272d64_0.0.0_c2d554c3377ea79c7664b93dc65d0803b45e3279...-5794079643713042223.lock'
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3169/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3169/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3169.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3169",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3169.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3169"
} | true | [
"I've also added environment variable configuration so that this can be configured once per machine (e.g. in a `.bashrc` file), as is already done for a few other config variables here.",
"Cancelling PR in favour of @mariosasko's in #3173"
] |
https://api.github.com/repos/huggingface/datasets/issues/4626 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4626/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4626/comments | https://api.github.com/repos/huggingface/datasets/issues/4626/events | https://github.com/huggingface/datasets/issues/4626 | 1,293,256,269 | I_kwDODunzps5NFYZN | 4,626 | Add non-commercial licensing info for datasets for which we removed tags | [] | open | false | null | 1 | 2022-07-04T14:32:43Z | 2022-07-08T14:27:29Z | null | null | We removed several YAML tags saying that certain datasets can't be used for commercial purposes: https://github.com/huggingface/datasets/pull/4613#discussion_r911919753
Reason for this is that we only allow tags that are part of our [supported list of licenses](https://github.com/huggingface/datasets/blob/84fc3ad73c85de4eda5d152dfede7671491449cb/src/datasets/utils/resources/standard_licenses.tsv)
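One way the removed nuance could still be surfaced on a dataset card is the generic license tag plus a free-text detail field (an illustrative sketch; `license_details` is also suggested in the comment below):
```yaml
license: other
license_details: "Non-commercial use only; see the Licensing Information section of this card."
```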
We should update the Licensing Information section of the concerned dataset cards, now that the non-commercial tag doesn't exist anymore for certain datasets | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4626/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4626/timeline | null | null | null | null | false | [
"yep plus `license_details` also makes sense for this IMO"
] |
https://api.github.com/repos/huggingface/datasets/issues/2090 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2090/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2090/comments | https://api.github.com/repos/huggingface/datasets/issues/2090/events | https://github.com/huggingface/datasets/pull/2090 | 836,807,498 | MDExOlB1bGxSZXF1ZXN0NTk3MjgwNTEy | 2,090 | Add machine translated multilingual STS benchmark dataset | [] | closed | false | null | 6 | 2021-03-20T13:28:07Z | 2021-03-29T13:24:42Z | 2021-03-29T13:00:15Z | null | also see here https://github.com/PhilipMay/stsb-multi-mt | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2090/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2090/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2090.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2090",
"merged_at": "2021-03-29T13:00:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2090.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2090"
} | true | [
"Hello dear maintainer, are there any comments or questions about this PR?",
"@iamollas thanks for the feedback. I did not see the template.\r\nI improved it...",
"Should be clean for merge IMO.",
"@lhoestq CI is green. ;-)",
"Thanks again ! this is awesome :)",
"Thanks for merging. :-)"
] |
https://api.github.com/repos/huggingface/datasets/issues/1207 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1207/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1207/comments | https://api.github.com/repos/huggingface/datasets/issues/1207/events | https://github.com/huggingface/datasets/pull/1207 | 757,953,830 | MDExOlB1bGxSZXF1ZXN0NTMzMjE3MDA4 | 1,207 | Add msr_genomics_kbcomp Dataset | [] | closed | false | null | 0 | 2020-12-06T15:40:05Z | 2020-12-07T15:55:17Z | 2020-12-07T15:55:11Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1207/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1207/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1207.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1207",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1207.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1207"
} | true | [] |
|
https://api.github.com/repos/huggingface/datasets/issues/2714 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2714/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2714/comments | https://api.github.com/repos/huggingface/datasets/issues/2714/events | https://github.com/huggingface/datasets/issues/2714 | 952,580,820 | MDU6SXNzdWU5NTI1ODA4MjA= | 2,714 | add more precise information for size | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | 1 | 2021-07-26T07:11:03Z | 2021-07-26T09:16:25Z | null | null | For the import into ELG, we would like a more precise description of the size of the dataset, instead of the current size categories. The size can be expressed in bytes, or any other preferred size unit. As suggested in the slack channel, perhaps this could be computed with a regex for existing datasets. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2714/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2714/timeline | null | null | null | null | false | [
"We already have this information in the dataset_infos.json files of each dataset.\r\nMaybe we can parse these files in the backend to return their content with the endpoint at huggingface.co/api/datasets\r\n\r\nFor now if you want to access this info you have to load the json for each dataset. For example:\r\n- for a dataset on github like `squad` \r\n- https://raw.githubusercontent.com/huggingface/datasets/master/datasets/squad/dataset_infos.json\r\n- for a community dataset on the hub like `lhoestq/squad`:\r\n https://huggingface.co/datasets/lhoestq/squad/resolve/main/dataset_infos.json"
] |
https://api.github.com/repos/huggingface/datasets/issues/811 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/811/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/811/comments | https://api.github.com/repos/huggingface/datasets/issues/811/events | https://github.com/huggingface/datasets/issues/811 | 738,280,132 | MDU6SXNzdWU3MzgyODAxMzI= | 811 | nlp viewer error | [
{
"color": "94203D",
"default": false,
"description": "",
"id": 2107841032,
"name": "nlp-viewer",
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer"
}
] | closed | false | null | 3 | 2020-11-07T17:08:58Z | 2022-02-15T10:51:44Z | 2022-02-14T15:24:20Z | null | Hello,
when I select amazon_us_reviews in the nlp viewer, it shows an error.
https://huggingface.co/nlp/viewer/?dataset=amazon_us_reviews

| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/811/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/811/timeline | null | completed | null | null | false | [
"and also for 'blog_authorship_corpus'\r\nhttps://huggingface.co/nlp/viewer/?dataset=blog_authorship_corpus\r\n\r\n",
"Is this the problem of my local computer or ??",
"Related to:\r\n- #673"
] |
https://api.github.com/repos/huggingface/datasets/issues/5246 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5246/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5246/comments | https://api.github.com/repos/huggingface/datasets/issues/5246/events | https://github.com/huggingface/datasets/pull/5246 | 1,451,226,055 | PR_kwDODunzps5DASLI | 5,246 | Release: 2.7.0 | [] | closed | false | null | 1 | 2022-11-16T09:32:44Z | 2022-11-16T09:39:42Z | 2022-11-16T09:37:03Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5246/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5246/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5246.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5246",
"merged_at": "2022-11-16T09:37:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5246.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5246"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/936 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/936/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/936/comments | https://api.github.com/repos/huggingface/datasets/issues/936/events | https://github.com/huggingface/datasets/pull/936 | 753,915,603 | MDExOlB1bGxSZXF1ZXN0NTI5OTAxODMw | 936 | Added HANS parses and categories | [] | closed | false | null | 0 | 2020-12-01T00:58:16Z | 2020-12-01T13:19:41Z | 2020-12-01T13:19:40Z | null | This pull request adds HANS missing information: the sentence parses, as well as the heuristic category. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/936/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/936/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/936.diff",
"html_url": "https://github.com/huggingface/datasets/pull/936",
"merged_at": "2020-12-01T13:19:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/936.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/936"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3942 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3942/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3942/comments | https://api.github.com/repos/huggingface/datasets/issues/3942/events | https://github.com/huggingface/datasets/issues/3942 | 1,171,177,122 | I_kwDODunzps5Fzr6i | 3,942 | reddit_tifu dataset: Checksums didn't match for dataset source files | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists",
"id": 1935892865,
"name": "duplicate",
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate"
}
] | closed | false | null | 3 | 2022-03-16T15:23:30Z | 2022-03-16T15:57:43Z | 2022-03-16T15:39:25Z | null | ## Describe the bug
When loading the reddit_tifu dataset, it throws the exception "Checksums didn't match for dataset source files"
## Steps to reproduce the bug
```python
import datasets
from datasets import load_dataset
print(datasets.__version__)
# load_dataset('billsum')
load_dataset('reddit_tifu', 'short')
```
## Environment info
- `datasets` version: 1.17.0
- Platform: mac os
- Python version: Python 3.7.6
- PyArrow version: 3.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3942/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3942/timeline | null | completed | null | null | false | [
"Hi @XingxingZhang, \r\n\r\nWe have already fixed this. You should update `datasets` version to at least 1.18.4:\r\n```shell\r\npip install -U datasets\r\n```\r\nAnd then force the redownload:\r\n```python\r\nload_dataset(\"...\", download_mode=\"force_redownload\")\r\n```\r\n\r\nDuplicate of:\r\n- #3773",
"thanks @albertvillanova . by upgrading to 1.18.4 and using `load_dataset(\"...\", download_mode=\"force_redownload\")` fixed \r\n the bug.\r\n\r\nusing the following as you suggested in another thread can also fixed the bug\r\n```\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```\r\n",
"The latter solution (installing from GitHub) was proposed because the fix was not released yet. But last week we made the 1.18.4 patch release (with the fix), so no longer necessary to install from GitHub.\r\n\r\nYou can now install from PyPI, as usual:\r\n```shell\r\npip install -U datasets\r\n```\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/5321 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5321/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5321/comments | https://api.github.com/repos/huggingface/datasets/issues/5321/events | https://github.com/huggingface/datasets/pull/5321 | 1,471,430,667 | PR_kwDODunzps5EEOhE | 5,321 | Fix loading from HF GCP cache | [] | closed | false | null | 2 | 2022-12-01T14:39:06Z | 2022-12-01T16:10:09Z | 2022-12-01T16:07:02Z | null | As reported in https://discuss.huggingface.co/t/error-loading-wikipedia-dataset/26599/4 it's not possible to download a cached version of Wikipedia from the HF GCP cache
I fixed it and added an integration test (runs in 10sec) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5321/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5321/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5321.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5321",
"merged_at": "2022-12-01T16:07:02Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5321.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5321"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"> Do you know why this stopped working?\r\n\r\nIt comes from the changes in https://github.com/huggingface/datasets/pull/5107/files#diff-355ae5c229f95f86895404b72378ecd6e966c41cbeebb674af6fe6e9611bc126"
] |
https://api.github.com/repos/huggingface/datasets/issues/368 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/368/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/368/comments | https://api.github.com/repos/huggingface/datasets/issues/368/events | https://github.com/huggingface/datasets/issues/368 | 654,087,251 | MDU6SXNzdWU2NTQwODcyNTE= | 368 | load_metric can't acquire lock anymore | [] | closed | false | null | 1 | 2020-07-09T14:04:09Z | 2020-07-10T13:45:20Z | 2020-07-10T13:45:20Z | null | I can't load metric (glue) anymore after an error in a previous run. I even removed the whole cache folder `/home/XXX/.cache/huggingface/`, and the issue persisted. What are the steps to fix this?
Traceback (most recent call last):
File "/home/XXX/miniconda3/envs/ML-DL-py-3.7/lib/python3.7/site-packages/nlp/metric.py", line 101, in __init__
self.filelock.acquire(timeout=1)
File "/home/XXX/miniconda3/envs/ML-DL-py-3.7/lib/python3.7/site-packages/filelock.py", line 278, in acquire
raise Timeout(self._lock_file)
filelock.Timeout: The file lock '/home/XXX/.cache/huggingface/metrics/glue/1.0.0/1-glue-0.arrow.lock' could not be acquired.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "examples_huggingface_nlp.py", line 268, in <module>
main()
File "examples_huggingface_nlp.py", line 242, in main
dataset, metric = get_dataset_metric(glue_task)
File "examples_huggingface_nlp.py", line 77, in get_dataset_metric
metric = nlp.load_metric('glue', glue_config, experiment_id=1)
File "/home/XXX/miniconda3/envs/ML-DL-py-3.7/lib/python3.7/site-packages/nlp/load.py", line 440, in load_metric
**metric_init_kwargs,
File "/home/XXX/miniconda3/envs/ML-DL-py-3.7/lib/python3.7/site-packages/nlp/metric.py", line 104, in __init__
"Cannot acquire lock, caching file might be used by another process, "
ValueError: Cannot acquire lock, caching file might be used by another process, you should setup a unique 'experiment_id' for this run.
I0709 15:54:41.008838 139854118430464 filelock.py:318] Lock 139852058030936 released on /home/XXX/.cache/huggingface/metrics/glue/1.0.0/1-glue-0.arrow.lock
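A sketch of the workaround hinted at by the error message itself: give each metric instantiation its own `experiment_id` so the lock and cache files do not collide (identifiers below are illustrative):
```python
import nlp

# Distinct experiment_ids produce distinct cache and lock files per run
metric_mrpc = nlp.load_metric("glue", "mrpc", experiment_id="run-mrpc-1")
metric_sst2 = nlp.load_metric("glue", "sst2", experiment_id="run-sst2-1")
```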
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/368/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/368/timeline | null | completed | null | null | false | [
"I found that, in the same process (or the same interactive session), if I do\r\n\r\nimport nlp\r\n\r\nm1 = nlp.load_metric('glue', 'mrpc')\r\nm2 = nlp.load_metric('glue', 'sst2')\r\n\r\nI will get the same error `ValueError: Cannot acquire lock, caching file might be used by another process, you should setup a unique 'experiment_id'`."
] |
https://api.github.com/repos/huggingface/datasets/issues/760 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/760/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/760/comments | https://api.github.com/repos/huggingface/datasets/issues/760/events | https://github.com/huggingface/datasets/issues/760 | 729,637,917 | MDU6SXNzdWU3Mjk2Mzc5MTc= | 760 | Add meta-data to the HANS dataset | [
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
},
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | 0 | 2020-10-26T14:56:53Z | 2020-12-03T13:38:34Z | 2020-12-03T13:38:34Z | null | The current version of the [HANS dataset](https://github.com/huggingface/datasets/blob/master/datasets/hans/hans.py) is missing the additional information provided for each example, including the sentence parses, heuristic and subcase. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/760/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/760/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/5254 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5254/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5254/comments | https://api.github.com/repos/huggingface/datasets/issues/5254/events | https://github.com/huggingface/datasets/pull/5254 | 1,452,600,088 | PR_kwDODunzps5DE47u | 5,254 | typo | [] | closed | false | null | 0 | 2022-11-17T02:39:57Z | 2022-11-18T10:53:45Z | 2022-11-18T10:53:45Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5254/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5254/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5254.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5254",
"merged_at": "2022-11-18T10:53:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5254.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5254"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5867 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5867/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5867/comments | https://api.github.com/repos/huggingface/datasets/issues/5867/events | https://github.com/huggingface/datasets/pull/5867 | 1,710,656,067 | PR_kwDODunzps5QizOn | 5,867 | Add logic for hashing modules/functions optimized with `torch.compile` | [] | open | false | null | 4 | 2023-05-15T19:03:35Z | 2023-05-17T13:41:48Z | null | null | Fix https://github.com/huggingface/datasets/issues/5839
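A small usage sketch of what this enables (illustrative only; `Hasher` is the fingerprinting helper in `datasets.fingerprint`, and the function below is made up):
```python
import torch
from datasets.fingerprint import Hasher

def add_one(batch):
    batch["x"] = [x + 1 for x in batch["x"]]
    return batch

compiled_add_one = torch.compile(add_one)

# With this change the hash is deterministic, so e.g. `dataset.map(compiled_add_one)`
# can reuse its cache across sessions instead of recomputing.
print(Hasher.hash(compiled_add_one))
```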
PS: The `Pickler.save` method is becoming a bit messy, so I plan to refactor the pickler a bit at some point. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5867/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5867/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5867.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5867",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5867.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5867"
} | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006598 / 0.011353 (-0.004755) | 0.004565 / 0.011008 (-0.006443) | 0.099063 / 0.038508 (0.060555) | 0.028334 / 0.023109 (0.005225) | 0.323539 / 0.275898 (0.047641) | 0.372462 / 0.323480 (0.048982) | 0.005120 / 0.007986 (-0.002865) | 0.004797 / 0.004328 (0.000468) | 0.076862 / 0.004250 (0.072611) | 0.038021 / 0.037052 (0.000968) | 0.337801 / 0.258489 (0.079312) | 0.374601 / 0.293841 (0.080760) | 0.031158 / 0.128546 (-0.097389) | 0.011672 / 0.075646 (-0.063974) | 0.324913 / 0.419271 (-0.094359) | 0.051702 / 0.043533 (0.008169) | 0.339440 / 0.255139 (0.084301) | 0.372502 / 0.283200 (0.089303) | 0.097590 / 0.141683 (-0.044093) | 1.534238 / 1.452155 (0.082083) | 1.599701 / 1.492716 (0.106985) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.204101 / 0.018006 (0.186095) | 0.416981 / 0.000490 (0.416491) | 0.003436 / 0.000200 (0.003236) | 0.000071 / 0.000054 (0.000017) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023527 / 0.037411 (-0.013885) | 0.095748 / 0.014526 (0.081222) | 0.104498 / 0.176557 (-0.072059) | 0.164000 / 0.737135 (-0.573135) | 0.109170 / 0.296338 (-0.187168) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418239 / 0.215209 (0.203030) | 4.153959 / 2.077655 (2.076305) | 1.856687 / 1.504120 (0.352567) | 1.657818 / 1.541195 (0.116623) | 1.715146 / 1.468490 
(0.246656) | 0.700673 / 4.584777 (-3.884103) | 3.401060 / 3.745712 (-0.344652) | 2.891045 / 5.269862 (-2.378816) | 1.519433 / 4.565676 (-3.046243) | 0.083151 / 0.424275 (-0.341124) | 0.012352 / 0.007607 (0.004745) | 0.523901 / 0.226044 (0.297856) | 5.288871 / 2.268929 (3.019943) | 2.322806 / 55.444624 (-53.121818) | 1.982223 / 6.876477 (-4.894253) | 2.074883 / 2.142072 (-0.067189) | 0.812400 / 4.805227 (-3.992827) | 0.152183 / 6.500664 (-6.348481) | 0.066538 / 0.075469 (-0.008931) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.223220 / 1.841788 (-0.618567) | 14.024391 / 8.074308 (5.950083) | 14.166657 / 10.191392 (3.975265) | 0.146017 / 0.680424 (-0.534407) | 0.016698 / 0.534201 (-0.517503) | 0.380779 / 0.579283 (-0.198504) | 0.387113 / 0.434364 (-0.047251) | 0.446329 / 0.540337 (-0.094009) | 0.523819 / 1.386936 (-0.863118) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006803 / 0.011353 (-0.004549) | 0.004554 / 0.011008 (-0.006454) | 0.077406 / 0.038508 (0.038897) | 0.028495 / 0.023109 (0.005386) | 0.358847 / 0.275898 (0.082949) | 0.393256 / 0.323480 (0.069776) | 0.005317 / 0.007986 (-0.002669) | 0.004690 / 0.004328 (0.000362) | 0.075842 / 0.004250 (0.071592) | 0.041985 / 0.037052 (0.004933) | 0.367546 / 0.258489 (0.109057) | 0.408019 / 0.293841 (0.114178) | 0.030712 / 0.128546 (-0.097834) | 0.011756 / 0.075646 (-0.063891) | 0.086002 / 0.419271 (-0.333269) | 0.038949 / 0.043533 (-0.004583) | 0.361045 / 0.255139 (0.105906) | 0.381728 / 0.283200 (0.098528) | 0.090692 / 0.141683 (-0.050991) | 1.493251 / 1.452155 (0.041097) | 1.584566 / 1.492716 (0.091850) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.217470 / 0.018006 (0.199463) | 0.429955 / 0.000490 (0.429465) | 0.000394 / 0.000200 (0.000194) | 0.000078 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026223 / 0.037411 (-0.011189) | 0.102570 / 0.014526 (0.088045) | 0.110848 / 0.176557 (-0.065709) | 0.162413 / 0.737135 (-0.574722) | 0.114579 / 0.296338 (-0.181760) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.464957 / 0.215209 (0.249748) | 4.656597 / 2.077655 (2.578942) | 2.279755 / 1.504120 (0.775636) | 2.230263 / 1.541195 (0.689068) | 2.341540 / 1.468490 (0.873050) | 0.699505 / 4.584777 (-3.885272) | 3.389003 / 3.745712 (-0.356709) | 1.867526 / 5.269862 (-3.402336) | 1.167171 / 4.565676 (-3.398506) | 0.083451 / 0.424275 (-0.340824) | 0.012348 / 0.007607 (0.004741) | 0.584205 / 0.226044 (0.358161) | 5.853623 / 2.268929 (3.584694) | 2.646650 / 55.444624 (-52.797974) | 2.286504 / 6.876477 (-4.589973) | 2.327536 / 2.142072 (0.185464) | 0.811209 / 4.805227 (-3.994018) | 0.151842 / 6.500664 (-6.348822) | 0.067783 / 0.075469 (-0.007686) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.330427 / 1.841788 (-0.511360) | 14.668981 / 8.074308 (6.594673) | 13.321154 / 10.191392 (3.129762) | 0.164383 / 0.680424 (-0.516040) | 0.016667 / 0.534201 (-0.517534) | 0.383439 / 0.579283 (-0.195844) | 0.392988 / 0.434364 (-0.041376) | 0.443318 / 0.540337 (-0.097020) | 0.537849 / 1.386936 (-0.849087) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006379 / 0.011353 (-0.004974) | 0.004691 / 0.011008 (-0.006317) | 0.098047 / 0.038508 (0.059539) | 0.028126 / 0.023109 (0.005017) | 0.327143 / 0.275898 (0.051245) | 0.362482 / 0.323480 (0.039002) | 0.004953 / 0.007986 (-0.003033) | 0.003386 / 0.004328 (-0.000943) | 0.076222 / 0.004250 (0.071971) | 0.037583 / 0.037052 (0.000531) | 0.329661 / 0.258489 (0.071172) | 0.365945 / 0.293841 (0.072104) | 0.030455 / 0.128546 (-0.098091) | 0.011397 / 0.075646 (-0.064249) | 0.323889 / 0.419271 (-0.095383) | 0.043719 / 0.043533 (0.000186) | 0.331499 / 0.255139 (0.076360) | 0.359357 / 0.283200 (0.076158) | 0.088904 / 0.141683 (-0.052779) | 1.458584 / 1.452155 (0.006429) | 1.549375 / 1.492716 (0.056658) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.195808 / 0.018006 (0.177802) | 0.411148 / 0.000490 (0.410659) | 0.003602 / 0.000200 (0.003402) | 0.000070 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023278 / 0.037411 (-0.014133) | 0.097317 / 0.014526 (0.082791) | 0.102669 / 0.176557 (-0.073888) | 0.168203 / 0.737135 (-0.568933) | 0.105205 / 0.296338 (-0.191133) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.424800 / 0.215209 (0.209591) | 4.228444 / 2.077655 (2.150790) | 1.895544 / 1.504120 (0.391424) | 1.698793 / 1.541195 (0.157598) | 1.717931 / 1.468490 
(0.249441) | 0.702251 / 4.584777 (-3.882526) | 3.407013 / 3.745712 (-0.338699) | 2.784634 / 5.269862 (-2.485228) | 1.491317 / 4.565676 (-3.074359) | 0.082926 / 0.424275 (-0.341350) | 0.012320 / 0.007607 (0.004713) | 0.524188 / 0.226044 (0.298143) | 5.249798 / 2.268929 (2.980870) | 2.358953 / 55.444624 (-53.085672) | 1.985922 / 6.876477 (-4.890555) | 2.034293 / 2.142072 (-0.107779) | 0.815671 / 4.805227 (-3.989556) | 0.152583 / 6.500664 (-6.348081) | 0.066687 / 0.075469 (-0.008782) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.210901 / 1.841788 (-0.630886) | 13.621765 / 8.074308 (5.547457) | 14.213215 / 10.191392 (4.021823) | 0.143346 / 0.680424 (-0.537078) | 0.016904 / 0.534201 (-0.517297) | 0.379795 / 0.579283 (-0.199489) | 0.381287 / 0.434364 (-0.053077) | 0.449086 / 0.540337 (-0.091251) | 0.538792 / 1.386936 (-0.848144) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006207 / 0.011353 (-0.005146) | 0.004404 / 0.011008 (-0.006604) | 0.076363 / 0.038508 (0.037854) | 0.027335 / 0.023109 (0.004226) | 0.370967 / 0.275898 (0.095069) | 0.401936 / 0.323480 (0.078456) | 0.004835 / 0.007986 (-0.003151) | 0.004559 / 0.004328 (0.000231) | 0.074964 / 0.004250 (0.070713) | 0.038254 / 0.037052 (0.001202) | 0.374799 / 0.258489 (0.116310) | 0.425191 / 0.293841 (0.131350) | 0.035290 / 0.128546 (-0.093256) | 0.011379 / 0.075646 (-0.064267) | 0.085911 / 0.419271 (-0.333360) | 0.043073 / 0.043533 (-0.000460) | 0.373557 / 0.255139 (0.118418) | 0.395179 / 0.283200 (0.111979) | 0.098602 / 0.141683 (-0.043081) | 1.467234 / 1.452155 (0.015079) | 1.571868 / 1.492716 (0.079152) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.221848 / 0.018006 (0.203842) | 0.394943 / 0.000490 (0.394454) | 0.002983 / 0.000200 (0.002783) | 0.000078 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024385 / 0.037411 (-0.013027) | 0.100087 / 0.014526 (0.085561) | 0.104897 / 0.176557 (-0.071660) | 0.156150 / 0.737135 (-0.580985) | 0.109113 / 0.296338 (-0.187226) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.441995 / 0.215209 (0.226786) | 4.415423 / 2.077655 (2.337769) | 2.148791 / 1.504120 (0.644671) | 1.947061 / 1.541195 (0.405866) | 1.954807 / 1.468490 (0.486317) | 0.690245 / 4.584777 (-3.894532) | 3.372766 / 3.745712 (-0.372946) | 1.851073 / 5.269862 (-3.418789) | 1.155558 / 4.565676 (-3.410118) | 0.082796 / 0.424275 (-0.341479) | 0.012845 / 0.007607 (0.005238) | 0.548173 / 0.226044 (0.322129) | 5.530984 / 2.268929 (3.262056) | 2.665360 / 55.444624 (-52.779264) | 2.324266 / 6.876477 (-4.552211) | 2.329397 / 2.142072 (0.187324) | 0.801481 / 4.805227 (-4.003746) | 0.152145 / 6.500664 (-6.348519) | 0.067915 / 0.075469 (-0.007554) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.291488 / 1.841788 (-0.550299) | 13.912143 / 8.074308 (5.837835) | 12.975493 / 10.191392 (2.784101) | 0.129915 / 0.680424 (-0.550509) | 0.016516 / 0.534201 (-0.517685) | 0.386979 / 0.579283 (-0.192304) | 0.389163 / 0.434364 (-0.045201) | 0.443324 / 0.540337 (-0.097014) | 0.533744 / 1.386936 (-0.853192) |\n\n</details>\n</details>\n\n\n",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5867). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008635 / 0.011353 (-0.002717) | 0.006014 / 0.011008 (-0.004995) | 0.116314 / 0.038508 (0.077806) | 0.041113 / 0.023109 (0.018004) | 0.358564 / 0.275898 (0.082666) | 0.397547 / 0.323480 (0.074067) | 0.007012 / 0.007986 (-0.000974) | 0.004638 / 0.004328 (0.000310) | 0.086509 / 0.004250 (0.082259) | 0.056731 / 0.037052 (0.019678) | 0.358859 / 0.258489 (0.100370) | 0.425339 / 0.293841 (0.131498) | 0.041780 / 0.128546 (-0.086767) | 0.014203 / 0.075646 (-0.061443) | 0.398240 / 0.419271 (-0.021031) | 0.060180 / 0.043533 (0.016647) | 0.352887 / 0.255139 (0.097748) | 0.381793 / 0.283200 (0.098594) | 0.148578 / 0.141683 (0.006895) | 1.749483 / 1.452155 (0.297328) | 1.869765 / 1.492716 (0.377049) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.244435 / 0.018006 (0.226428) | 0.499545 / 0.000490 (0.499055) | 0.004576 / 0.000200 (0.004376) | 0.000147 / 0.000054 (0.000093) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031163 / 0.037411 (-0.006249) | 0.131082 / 0.014526 (0.116556) | 0.137442 / 0.176557 (-0.039114) | 0.203783 / 0.737135 (-0.533352) | 0.144068 / 0.296338 (-0.152270) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.503587 / 0.215209 (0.288378) | 5.011953 / 2.077655 (2.934299) | 2.366968 / 1.504120 (0.862848) | 2.130914 / 1.541195 (0.589719) | 2.243560 / 1.468490 
(0.775070) | 0.856719 / 4.584777 (-3.728058) | 4.707445 / 3.745712 (0.961733) | 2.506166 / 5.269862 (-2.763696) | 1.590400 / 4.565676 (-2.975277) | 0.102075 / 0.424275 (-0.322200) | 0.014499 / 0.007607 (0.006892) | 0.624966 / 0.226044 (0.398922) | 6.197671 / 2.268929 (3.928742) | 2.898481 / 55.444624 (-52.546143) | 2.499590 / 6.876477 (-4.376886) | 2.649690 / 2.142072 (0.507617) | 1.012542 / 4.805227 (-3.792685) | 0.202833 / 6.500664 (-6.297831) | 0.078033 / 0.075469 (0.002564) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.448321 / 1.841788 (-0.393467) | 18.084909 / 8.074308 (10.010601) | 17.383027 / 10.191392 (7.191635) | 0.212167 / 0.680424 (-0.468256) | 0.020754 / 0.534201 (-0.513447) | 0.514653 / 0.579283 (-0.064630) | 0.543307 / 0.434364 (0.108944) | 0.653066 / 0.540337 (0.112728) | 0.745773 / 1.386936 (-0.641164) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008576 / 0.011353 (-0.002777) | 0.005834 / 0.011008 (-0.005174) | 0.089842 / 0.038508 (0.051334) | 0.040035 / 0.023109 (0.016926) | 0.449329 / 0.275898 (0.173431) | 0.471572 / 0.323480 (0.148092) | 0.006771 / 0.007986 (-0.001215) | 0.006129 / 0.004328 (0.001800) | 0.090370 / 0.004250 (0.086119) | 0.056924 / 0.037052 (0.019872) | 0.455134 / 0.258489 (0.196645) | 0.502670 / 0.293841 (0.208829) | 0.041689 / 0.128546 (-0.086857) | 0.014447 / 0.075646 (-0.061200) | 0.104528 / 0.419271 (-0.314744) | 0.055535 / 0.043533 (0.012003) | 0.450667 / 0.255139 (0.195528) | 0.453108 / 0.283200 (0.169908) | 0.119296 / 0.141683 (-0.022387) | 1.747359 / 1.452155 (0.295204) | 1.839421 / 1.492716 (0.346705) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.314910 / 0.018006 (0.296904) | 0.495575 / 0.000490 (0.495085) | 0.054702 / 0.000200 (0.054503) | 0.000505 / 0.000054 (0.000450) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033991 / 0.037411 (-0.003420) | 0.133268 / 0.014526 (0.118742) | 0.142286 / 0.176557 (-0.034271) | 0.200562 / 0.737135 (-0.536573) | 0.147161 / 0.296338 (-0.149178) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.520288 / 0.215209 (0.305079) | 5.227684 / 2.077655 (3.150029) | 2.553330 / 1.504120 (1.049210) | 2.324338 / 1.541195 (0.783143) | 2.406790 / 1.468490 (0.938300) | 0.850404 / 4.584777 (-3.734373) | 4.612156 / 3.745712 (0.866444) | 2.592546 / 5.269862 (-2.677316) | 1.708984 / 4.565676 (-2.856692) | 0.103751 / 0.424275 (-0.320524) | 0.014379 / 0.007607 (0.006772) | 0.634661 / 0.226044 (0.408616) | 6.344939 / 2.268929 (4.076010) | 3.179807 / 55.444624 (-52.264817) | 2.831856 / 6.876477 (-4.044621) | 2.866729 / 2.142072 (0.724656) | 0.994519 / 4.805227 (-3.810708) | 0.201566 / 6.500664 (-6.299098) | 0.078902 / 0.075469 (0.003433) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.538738 / 1.841788 (-0.303049) | 18.746367 / 8.074308 (10.672059) | 16.504763 / 10.191392 (6.313371) | 0.197898 / 0.680424 (-0.482526) | 0.020469 / 0.534201 (-0.513732) | 0.529106 / 0.579283 (-0.050177) | 0.536891 / 0.434364 (0.102527) | 0.600947 / 0.540337 (0.060610) | 0.701713 / 1.386936 (-0.685223) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/4070 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4070/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4070/comments | https://api.github.com/repos/huggingface/datasets/issues/4070/events | https://github.com/huggingface/datasets/pull/4070 | 1,186,810,205 | PR_kwDODunzps41VMYq | 4,070 | Create metric card for seqeval | [] | closed | false | null | 1 | 2022-03-30T18:08:01Z | 2022-04-01T19:02:58Z | 2022-04-01T18:57:25Z | null | Proposing metric card for seqeval. Not sure which values to report for Popular papers though. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4070/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4070/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4070.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4070",
"merged_at": "2022-04-01T18:57:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4070.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4070"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/143 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/143/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/143/comments | https://api.github.com/repos/huggingface/datasets/issues/143/events | https://github.com/huggingface/datasets/issues/143 | 619,457,641 | MDU6SXNzdWU2MTk0NTc2NDE= | 143 | ArrowTypeError in squad metrics | [
{
"color": "25b21e",
"default": false,
"description": "A bug in a metric script",
"id": 2067393914,
"name": "metric bug",
"node_id": "MDU6TGFiZWwyMDY3MzkzOTE0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/metric%20bug"
}
] | closed | false | null | 1 | 2020-05-16T12:06:37Z | 2020-05-22T13:38:52Z | 2020-05-22T13:36:48Z | null | `squad_metric.compute` is giving the following error:
```
ArrowTypeError: Could not convert [{'text': 'Denver Broncos'}, {'text': 'Denver Broncos'}, {'text': 'Denver Broncos'}] with type list: was not a dict, tuple, or recognized null value for conversion to struct type
```
This is what my predictions and references look like:
```
predictions[0]
# {'id': '56be4db0acb8001400a502ec', 'prediction_text': 'Denver Broncos'}
```
```
references[0]
# {'answers': [{'text': 'Denver Broncos'},
{'text': 'Denver Broncos'},
{'text': 'Denver Broncos'}],
'id': '56be4db0acb8001400a502ec'}
```
These are structured as per the `squad_metric.compute` help string. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/143/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/143/timeline | null | completed | null | null | false | [
"There was an issue in the format, thanks.\r\nNow you can do\r\n```python3\r\nsquad_dset = nlp.load_dataset(\"squad\")\r\nsquad_metric = nlp.load_metric(\"/Users/quentinlhoest/Desktop/hf/nlp-bis/metrics/squad\")\r\npredictions = [\r\n {\"id\": v[\"id\"], \"prediction_text\": v[\"answers\"][\"text\"][0]} # take first possible answer\r\n for v in squad_dset[\"validation\"]\r\n]\r\nsquad_metric.compute(predictions, squad_dset[\"validation\"])\r\n```\r\n\r\nand the expected format is \r\n```\r\nArgs:\r\n predictions: List of question-answers dictionaries with the following key-values:\r\n - 'id': id of the question-answer pair as given in the references (see below)\r\n - 'prediction_text': the text of the answer\r\n references: List of question-answers dictionaries with the following key-values:\r\n - 'id': id of the question-answer pair (see above),\r\n - 'answers': a Dict {'text': list of possible texts for the answer, as a list of strings}\r\n```"
] |
https://api.github.com/repos/huggingface/datasets/issues/5067 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5067/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5067/comments | https://api.github.com/repos/huggingface/datasets/issues/5067/events | https://github.com/huggingface/datasets/pull/5067 | 1,396,361,768 | PR_kwDODunzps5AI86d | 5,067 | Fix CONTRIBUTING once dataset scripts transferred to Hub | [] | closed | false | null | 1 | 2022-10-04T14:16:05Z | 2022-10-06T06:14:43Z | 2022-10-06T06:12:12Z | null | This PR updates the `CONTRIBUTING.md` guide, once all the dataset scripts have been removed from the GitHub repo and transferred to the HF Hub:
- #4974
See diff here: https://github.com/huggingface/datasets/commit/e3291ecff9e54f09fcee3f313f051a03fdc3d94b
Additionally, this PR fixes the line separator that by some previous mistake was CRLF instead of LF. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5067/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5067/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5067.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5067",
"merged_at": "2022-10-06T06:12:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5067.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5067"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/3746 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3746/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3746/comments | https://api.github.com/repos/huggingface/datasets/issues/3746/events | https://github.com/huggingface/datasets/pull/3746 | 1,141,612,810 | PR_kwDODunzps4zAS-C | 3,746 | Use the same seed to shuffle shards and metadata in streaming mode | [] | closed | false | null | 0 | 2022-02-17T17:06:31Z | 2022-02-23T15:00:59Z | 2022-02-23T15:00:58Z | null | When shuffling in streaming mode, these two entangled lists (the files and their metadata, shown below) are shuffled independently. In this PR I changed this to shuffle the same-length lists with the exact same seed, so that the files and metadata stay aligned.
```python
gen_kwargs = {
"files": [os.path.join(data_dir, filename) for filename in all_files],
"metadata_files": [all_metadata[filename] for filename in all_files],
}
```
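A minimal sketch of the idea (illustrative, not the actual implementation): a fresh RNG seeded identically produces the same permutation for two same-length lists, so the pairs stay aligned.
```python
import random

files = ["shard-0.tar", "shard-1.tar", "shard-2.tar"]
metadata_files = ["shard-0.json", "shard-1.json", "shard-2.json"]

seed = 42
random.Random(seed).shuffle(files)
random.Random(seed).shuffle(metadata_files)  # same seed -> same permutation

assert [f.rsplit(".", 1)[0] for f in files] == [m.rsplit(".", 1)[0] for m in metadata_files]
```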
IMO this is important to avoid big but silent issues.
Fix https://github.com/huggingface/datasets/issues/3744 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3746/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3746/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3746.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3746",
"merged_at": "2022-02-23T15:00:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3746.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3746"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1293 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1293/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1293/comments | https://api.github.com/repos/huggingface/datasets/issues/1293/events | https://github.com/huggingface/datasets/pull/1293 | 759,360,113 | MDExOlB1bGxSZXF1ZXN0NTM0Mzc4OTQ0 | 1,293 | add hrenwac_para | [] | closed | false | null | 0 | 2020-12-08T11:16:41Z | 2020-12-08T11:34:47Z | 2020-12-08T11:34:38Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1293/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1293/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1293.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1293",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1293.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1293"
} | true | [] |
|
https://api.github.com/repos/huggingface/datasets/issues/3353 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3353/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3353/comments | https://api.github.com/repos/huggingface/datasets/issues/3353/events | https://github.com/huggingface/datasets/issues/3353 | 1,068,173,783 | I_kwDODunzps4_qwnX | 3,353 | add one field "example_id", but I can't see it in the "comput_loss" function | [] | closed | false | null | 7 | 2021-12-01T09:35:09Z | 2021-12-01T16:02:39Z | 2021-12-01T16:02:39Z | null | Hi, I added one field **example_id**, but I can't see it in the **compute_loss** function. How can I do this? Below is the information of the inputs:
```
*********************** inputs: {'attention_mask': tensor([[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
...,
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0]], device='cuda:0'), 'end_positions': tensor([ 25, 97, 93, 44, 25, 112, 109, 134], device='cuda:0'), 'input_ids': tensor([[ 101, 2054, 2390, ..., 0, 0, 0],
[ 101, 2054, 2515, ..., 0, 0, 0],
[ 101, 2054, 2106, ..., 0, 0, 0],
...,
[ 101, 2339, 2001, ..., 0, 0, 0],
[ 101, 2054, 2515, ..., 0, 0, 0],
[ 101, 2054, 2003, ..., 0, 0, 0]], device='cuda:0'), 'start_positions': tensor([ 20, 90, 89, 41, 25, 96, 106, 132], device='cuda:0'), 'token_type_ids': tensor([[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]], device='cuda:0')}
```
```
# This function preprocesses a question answering dataset, tokenizing the question and context text
# and finding the right offsets for the answer spans in the tokenized context (to use as labels).
# Adapted from https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_qa.py
def prepare_train_dataset_qa(examples, tokenizer, max_seq_length=None):
questions = [q.lstrip() for q in examples["question"]]
max_seq_length = tokenizer.model_max_length
# tokenize both questions and the corresponding context
# if the context length is longer than max_length, we split it to several
# chunks of max_length
tokenized_examples = tokenizer(
questions,
examples["context"],
truncation="only_second",
max_length=max_seq_length,
stride=min(max_seq_length // 2, 128),
return_overflowing_tokens=True,
return_offsets_mapping=True,
padding="max_length"
)
# Since one example might give us several features if it has a long context,
# we need a map from a feature to its corresponding example.
sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping")
# The offset mappings will give us a map from token to character position
# in the original context. This will help us compute the start_positions
# and end_positions to get the final answer string.
offset_mapping = tokenized_examples.pop("offset_mapping")
tokenized_examples["start_positions"] = []
tokenized_examples["end_positions"] = []
tokenized_examples["example_id"] = []
for i, offsets in enumerate(offset_mapping):
input_ids = tokenized_examples["input_ids"][i]
# We will label features not containing the answer the index of the CLS token.
cls_index = input_ids.index(tokenizer.cls_token_id)
sequence_ids = tokenized_examples.sequence_ids(i)
# from the feature idx to sample idx
sample_index = sample_mapping[i]
# get the answer for a feature
answers = examples["answers"][sample_index]
tokenized_examples["example_id"].append(examples["id"][sample_index])
if len(answers["answer_start"]) == 0:
tokenized_examples["start_positions"].append(cls_index)
tokenized_examples["end_positions"].append(cls_index)
else:
# Start/end character index of the answer in the text.
start_char = answers["answer_start"][0]
end_char = start_char + len(answers["text"][0])
# Start token index of the current span in the text.
token_start_index = 0
while sequence_ids[token_start_index] != 1:
token_start_index += 1
# End token index of the current span in the text.
token_end_index = len(input_ids) - 1
while sequence_ids[token_end_index] != 1:
token_end_index -= 1
# Detect if the answer is out of the span (in which case this feature is labeled with the CLS index).
if not (offsets[token_start_index][0] <= start_char and
offsets[token_end_index][1] >= end_char):
tokenized_examples["start_positions"].append(cls_index)
tokenized_examples["end_positions"].append(cls_index)
else:
# Otherwise move the token_start_index and token_end_index to the two ends of the answer.
# Note: we could go after the last offset if the answer is the last word (edge case).
while token_start_index < len(offsets) and \
offsets[token_start_index][0] <= start_char:
token_start_index += 1
tokenized_examples["start_positions"].append(
token_start_index - 1)
while offsets[token_end_index][1] >= end_char:
token_end_index -= 1
tokenized_examples["end_positions"].append(token_end_index + 1)
return tokenized_examples
```
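For context, a hedged sketch of the kind of setup discussed in the comments below (names are illustrative): keep the extra column by disabling column pruning, then pop it off before calling the model. Note that the default data collator may still drop string-typed fields, so a custom collator or dataloader can also be needed.
```python
from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(output_dir="./out", remove_unused_columns=False)

class QATrainer(Trainer):
    def compute_loss(self, model, inputs, return_outputs=False):
        example_ids = inputs.pop("example_id", None)  # not a model input, so remove it
        outputs = model(**inputs)
        loss = outputs.loss
        return (loss, outputs) if return_outputs else loss
```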
_Originally posted by @yanllearnn in https://github.com/huggingface/datasets/issues/3333#issuecomment-983457161_ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3353/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3353/timeline | null | completed | null | null | false | [
"Hi ! Your function looks fine, I used to map `squad` locally and it indeed added the `example_id` field correctly.\r\n\r\nHowever I think that in the `compute_loss` method only a subset of the fields are available: the model inputs. Since `example_id` is not a model input (it's not passed as a parameter to the model), the data loader doesn't need to return it by default.\r\n\r\nHowever you can disable this behavior by setting `remove_unused_columns` to `False` to your training arguments. In this case in `compute_loss` you will get the full item with all the fields.\r\n\r\nNote that since the model doesn't take `example_id` as input, you will have to remove it from the inputs when `model(**inputs)` is called",
"Hi, I have set **args.remove_unused_columns=False** and **training_args.remove_unused_columns=False**, but the field doesn't been contained yet.\r\n```\r\ndef main():\r\n argp = HfArgumentParser(TrainingArguments)\r\n # The HfArgumentParser object collects command-line arguments into an object (and provides default values for unspecified arguments).\r\n # In particular, TrainingArguments has several keys that you'll need/want to specify (when you call run.py from the command line):\r\n # --do_train\r\n # When included, this argument tells the script to train a model.\r\n # See docstrings for \"--task\" and \"--dataset\" for how the training dataset is selected.\r\n # --do_eval\r\n # When included, this argument tells the script to evaluate the trained/loaded model on the validation split of the selected dataset.\r\n # --per_device_train_batch_size <int, default=8>\r\n # This is the training batch size.\r\n # If you're running on GPU, you should try to make this as large as you can without getting CUDA out-of-memory errors.\r\n # For reference, with --max_length=128 and the default ELECTRA-small model, a batch size of 32 should fit in 4gb of GPU memory.\r\n # --num_train_epochs <float, default=3.0>\r\n # How many passes to do through the training data.\r\n # --output_dir <path>\r\n # Where to put the trained model checkpoint(s) and any eval predictions.\r\n # *This argument is required*.\r\n\r\n argp.add_argument('--model', type=str,\r\n default='google/electra-small-discriminator',\r\n help=\"\"\"This argument specifies the base model to fine-tune.\r\n This should either be a HuggingFace model ID (see https://huggingface.co/models)\r\n or a path to a saved model checkpoint (a folder containing config.json and pytorch_model.bin).\"\"\")\r\n argp.add_argument('--task', type=str, choices=['nli', 'qa'], required=True,\r\n help=\"\"\"This argument specifies which task to train/evaluate on.\r\n Pass \"nli\" for natural language inference or \"qa\" for question answering.\r\n By default, \"nli\" will use the SNLI dataset, and \"qa\" will use the SQuAD dataset.\"\"\")\r\n argp.add_argument('--dataset', type=str, default=None,\r\n help=\"\"\"This argument overrides the default dataset used for the specified task.\"\"\")\r\n argp.add_argument('--max_length', type=int, default=128,\r\n help=\"\"\"This argument limits the maximum sequence length used during training/evaluation.\r\n Shorter sequence lengths need less memory and computation time, but some examples may end up getting truncated.\"\"\")\r\n argp.add_argument('--max_train_samples', type=int, default=None,\r\n help='Limit the number of examples to train on.')\r\n argp.add_argument('--max_eval_samples', type=int, default=None,\r\n help='Limit the number of examples to evaluate on.')\r\n\r\n argp.remove_unused_columns = False\r\n training_args, args = argp.parse_args_into_dataclasses()\r\n args.remove_unused_columns=False\r\n training_args.remove_unused_columns=False\r\n```\r\n\r\n\r\n```\r\n**************** train_dataset: Dataset({\r\n features: ['id', 'title', 'context', 'question', 'answers'],\r\n num_rows: 87599\r\n})\r\n\r\n\r\n**************** train_dataset_featurized: Dataset({\r\n features: ['attention_mask', 'end_positions', 'input_ids', 'start_positions', 'token_type_ids'],\r\n num_rows: 87714\r\n})\r\n```",
"Hi, I print the value, all are set to False, but don't work.\r\n```\r\n********************* training_args: TrainingArguments(\r\n_n_gpu=1,\r\nadafactor=False,\r\nadam_beta1=0.9,\r\nadam_beta2=0.999,\r\nadam_epsilon=1e-08,\r\ndataloader_drop_last=False,\r\ndataloader_num_workers=0,\r\ndataloader_pin_memory=True,\r\nddp_find_unused_parameters=None,\r\ndebug=[],\r\ndeepspeed=None,\r\ndisable_tqdm=False,\r\ndo_eval=False,\r\ndo_predict=False,\r\ndo_train=True,\r\neval_accumulation_steps=None,\r\neval_steps=None,\r\nevaluation_strategy=IntervalStrategy.NO,\r\nfp16=False,\r\nfp16_backend=auto,\r\nfp16_full_eval=False,\r\nfp16_opt_level=O1,\r\ngradient_accumulation_steps=1,\r\ngreater_is_better=None,\r\ngroup_by_length=False,\r\nignore_data_skip=False,\r\nlabel_names=None,\r\nlabel_smoothing_factor=0.0,\r\nlearning_rate=5e-05,\r\nlength_column_name=length,\r\nload_best_model_at_end=False,\r\nlocal_rank=-1,\r\nlog_level=-1,\r\nlog_level_replica=-1,\r\nlog_on_each_node=True,\r\nlogging_dir=./re_trained_model/runs/Dec01_14-15-08_399b9290604c,\r\nlogging_first_step=False,\r\nlogging_steps=500,\r\nlogging_strategy=IntervalStrategy.STEPS,\r\nlr_scheduler_type=SchedulerType.LINEAR,\r\nmax_grad_norm=1.0,\r\nmax_steps=-1,\r\nmetric_for_best_model=None,\r\nmp_parameters=,\r\nno_cuda=False,\r\nnum_train_epochs=3.0,\r\noutput_dir=./re_trained_model,\r\noverwrite_output_dir=False,\r\npast_index=-1,\r\nper_device_eval_batch_size=8,\r\nper_device_train_batch_size=8,\r\nprediction_loss_only=False,\r\npush_to_hub=False,\r\npush_to_hub_model_id=re_trained_model,\r\npush_to_hub_organization=None,\r\npush_to_hub_token=None,\r\nremove_unused_columns=False,\r\nreport_to=['tensorboard'],\r\nresume_from_checkpoint=None,\r\nrun_name=./re_trained_model,\r\nsave_on_each_node=False,\r\nsave_steps=500,\r\nsave_strategy=IntervalStrategy.STEPS,\r\nsave_total_limit=None,\r\nseed=42,\r\nsharded_ddp=[],\r\nskip_memory_metrics=True,\r\ntpu_metrics_debug=False,\r\ntpu_num_cores=None,\r\nuse_legacy_prediction_loop=False,\r\nwarmup_ratio=0.0,\r\nwarmup_steps=0,\r\nweight_decay=0.0,\r\n)\r\n```\r\n```\r\n********************* args: Namespace(dataset='squad', max_eval_samples=None, max_length=128, max_train_samples=None, model='google/electra-small-discriminator', remove_unused_columns=False, task='qa')\r\n2021-12-01 14:15:10,048 - WARNING - datasets.builder - Reusing dataset squad (/root/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453)\r\nSome weights of the model checkpoint at google/electra-small-discriminator were not used when initializing ElectraForQuestionAnswering: ['discriminator_predictions.dense_prediction.weight', 'discriminator_predictions.dense_prediction.bias', 'discriminator_predictions.dense.weight', 'discriminator_predictions.dense.bias']\r\n- This IS expected if you are initializing ElectraForQuestionAnswering from the checkpoint of a model trained on another task or with another architecture (e.g. 
initializing a BertForSequenceClassification model from a BertForPreTraining model).\r\n- This IS NOT expected if you are initializing ElectraForQuestionAnswering from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\nSome weights of ElectraForQuestionAnswering were not initialized from the model checkpoint at google/electra-small-discriminator and are newly initialized: ['qa_outputs.bias', 'qa_outputs.weight']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\nPreprocessing data... (this takes a little bit, should only happen once per dataset)\r\n```",
"Hmmm, it might be because the default data collator removes all the fields with `string` type:\r\n\r\nhttps://github.com/huggingface/transformers/blob/4c0dd199c8305903564c2edeae23d294edd4b321/src/transformers/data/data_collator.py#L107-L112\r\n\r\nI guess you also need a custom data collator that doesn't remove them.",
"can you give a tutorial about how to do this?",
"I overwrite **get_train_dataloader**, and remove **_remove_unused_columns**, but it doesn't work.\r\n\r\n```\r\n def get_train_dataloader(self) -> DataLoader:\r\n \"\"\"\r\n Returns the training :class:`~torch.utils.data.DataLoader`.\r\n\r\n Will use no sampler if :obj:`self.train_dataset` does not implement :obj:`__len__`, a random sampler (adapted\r\n to distributed training if necessary) otherwise.\r\n\r\n Subclass and override this method if you want to inject some custom behavior.\r\n \"\"\"\r\n if self.train_dataset is None:\r\n raise ValueError(\"Trainer: training requires a train_dataset.\")\r\n\r\n train_dataset = self.train_dataset\r\n # if is_datasets_available() and isinstance(train_dataset, datasets.Dataset):\r\n # train_dataset = self._remove_unused_columns(train_dataset, description=\"training\")\r\n\r\n if isinstance(train_dataset, torch.utils.data.IterableDataset):\r\n if self.args.world_size > 1:\r\n train_dataset = IterableDatasetShard(\r\n train_dataset,\r\n batch_size=self.args.train_batch_size,\r\n drop_last=self.args.dataloader_drop_last,\r\n num_processes=self.args.world_size,\r\n process_index=self.args.process_index,\r\n )\r\n\r\n return DataLoader(\r\n train_dataset,\r\n batch_size=self.args.train_batch_size,\r\n collate_fn=self.data_collator,\r\n num_workers=self.args.dataloader_num_workers,\r\n pin_memory=self.args.dataloader_pin_memory,\r\n )\r\n\r\n train_sampler = self._get_train_sampler()\r\n\r\n return DataLoader(\r\n train_dataset,\r\n batch_size=self.args.train_batch_size,\r\n sampler=train_sampler,\r\n collate_fn=self.data_collator,\r\n drop_last=self.args.dataloader_drop_last,\r\n num_workers=self.args.dataloader_num_workers,\r\n pin_memory=self.args.dataloader_pin_memory,\r\n )\r\n```",
"Hi, it works now, thank you.\r\n1. **args.remove_unused_columns=False** and **training_args.remove_unused_columns=False**\r\n2. overwrite **get_train_dataloader**, and remove **_remove_unused_columns**\r\n3. add new fields, and can be got in **inputs**. "
] |
https://api.github.com/repos/huggingface/datasets/issues/5491 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5491/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5491/comments | https://api.github.com/repos/huggingface/datasets/issues/5491/events | https://github.com/huggingface/datasets/pull/5491 | 1,566,235,012 | PR_kwDODunzps5JA9OD | 5,491 | [MINOR] Typo | [] | closed | false | null | 2 | 2023-02-01T14:39:39Z | 2023-02-02T07:42:28Z | 2023-02-02T07:35:14Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5491/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5491/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5491.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5491",
"merged_at": "2023-02-02T07:35:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5491.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5491"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008726 / 0.011353 (-0.002627) | 0.004589 / 0.011008 (-0.006419) | 0.101078 / 0.038508 (0.062570) | 0.029732 / 0.023109 (0.006622) | 0.298309 / 0.275898 (0.022411) | 0.367800 / 0.323480 (0.044320) | 0.007025 / 0.007986 (-0.000961) | 0.003513 / 0.004328 (-0.000815) | 0.079531 / 0.004250 (0.075281) | 0.035588 / 0.037052 (-0.001465) | 0.307850 / 0.258489 (0.049361) | 0.351603 / 0.293841 (0.057762) | 0.033593 / 0.128546 (-0.094954) | 0.011669 / 0.075646 (-0.063977) | 0.323025 / 0.419271 (-0.096246) | 0.042047 / 0.043533 (-0.001486) | 0.300565 / 0.255139 (0.045426) | 0.329362 / 0.283200 (0.046163) | 0.089001 / 0.141683 (-0.052682) | 1.472799 / 1.452155 (0.020644) | 1.488902 / 1.492716 (-0.003814) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.012491 / 0.018006 (-0.005515) | 0.408245 / 0.000490 (0.407755) | 0.003878 / 0.000200 (0.003678) | 0.000078 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023698 / 0.037411 (-0.013713) | 0.100442 / 0.014526 (0.085916) | 0.108233 / 0.176557 (-0.068323) | 0.145308 / 0.737135 (-0.591827) | 0.113121 / 0.296338 (-0.183218) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420490 / 0.215209 (0.205281) | 4.179838 / 2.077655 (2.102183) | 2.156007 / 1.504120 (0.651887) | 1.911358 / 1.541195 (0.370163) | 1.867961 / 1.468490 
(0.399471) | 0.685254 / 4.584777 (-3.899523) | 3.382386 / 3.745712 (-0.363326) | 3.285657 / 5.269862 (-1.984205) | 1.693878 / 4.565676 (-2.871798) | 0.081680 / 0.424275 (-0.342595) | 0.012182 / 0.007607 (0.004575) | 0.526021 / 0.226044 (0.299977) | 5.276217 / 2.268929 (3.007289) | 2.541518 / 55.444624 (-52.903106) | 2.313452 / 6.876477 (-4.563025) | 2.340000 / 2.142072 (0.197928) | 0.807099 / 4.805227 (-3.998128) | 0.147587 / 6.500664 (-6.353077) | 0.064280 / 0.075469 (-0.011189) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.223466 / 1.841788 (-0.618321) | 13.911365 / 8.074308 (5.837057) | 14.261550 / 10.191392 (4.070158) | 0.135922 / 0.680424 (-0.544502) | 0.028832 / 0.534201 (-0.505368) | 0.393142 / 0.579283 (-0.186141) | 0.400507 / 0.434364 (-0.033857) | 0.471792 / 0.540337 (-0.068546) | 0.558278 / 1.386936 (-0.828658) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006644 / 0.011353 (-0.004709) | 0.004531 / 0.011008 (-0.006478) | 0.076285 / 0.038508 (0.037777) | 0.027249 / 0.023109 (0.004140) | 0.343137 / 0.275898 (0.067239) | 0.378498 / 0.323480 (0.055018) | 0.004950 / 0.007986 (-0.003036) | 0.003422 / 0.004328 (-0.000907) | 0.075662 / 0.004250 (0.071412) | 0.039692 / 0.037052 (0.002640) | 0.343402 / 0.258489 (0.084913) | 0.385067 / 0.293841 (0.091226) | 0.032382 / 0.128546 (-0.096164) | 0.011577 / 0.075646 (-0.064069) | 0.085534 / 0.419271 (-0.333738) | 0.052139 / 0.043533 (0.008606) | 0.342176 / 0.255139 (0.087037) | 0.367298 / 0.283200 (0.084098) | 0.096088 / 0.141683 (-0.045595) | 1.470770 / 1.452155 (0.018615) | 1.567316 / 1.492716 (0.074600) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.217664 / 0.018006 (0.199657) | 0.397807 / 0.000490 (0.397317) | 0.006864 / 0.000200 (0.006664) | 0.000099 / 0.000054 (0.000044) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025064 / 0.037411 (-0.012348) | 0.100906 / 0.014526 (0.086380) | 0.107444 / 0.176557 (-0.069113) | 0.143679 / 0.737135 (-0.593457) | 0.112460 / 0.296338 (-0.183879) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.442634 / 0.215209 (0.227425) | 4.410687 / 2.077655 (2.333032) | 2.067445 / 1.504120 (0.563325) | 1.860569 / 1.541195 (0.319374) | 1.943523 / 1.468490 (0.475033) | 0.694585 / 4.584777 (-3.890192) | 3.375906 / 3.745712 (-0.369806) | 3.483334 / 5.269862 (-1.786528) | 1.437700 / 4.565676 (-3.127977) | 0.083138 / 0.424275 (-0.341137) | 0.012979 / 0.007607 (0.005372) | 0.536414 / 0.226044 (0.310370) | 5.379872 / 2.268929 (3.110943) | 2.517907 / 55.444624 (-52.926717) | 2.164772 / 6.876477 (-4.711705) | 2.212839 / 2.142072 (0.070767) | 0.799675 / 4.805227 (-4.005553) | 0.150253 / 6.500664 (-6.350411) | 0.067033 / 0.075469 (-0.008436) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.295592 / 1.841788 (-0.546196) | 14.372932 / 8.074308 (6.298623) | 13.618423 / 10.191392 (3.427031) | 0.141212 / 0.680424 (-0.539212) | 0.016933 / 0.534201 (-0.517268) | 0.385664 / 0.579283 (-0.193619) | 0.386919 / 0.434364 (-0.047445) | 0.477022 / 0.540337 (-0.063315) | 0.565158 / 1.386936 (-0.821778) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/1152 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1152/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1152/comments | https://api.github.com/repos/huggingface/datasets/issues/1152/events | https://github.com/huggingface/datasets/pull/1152 | 757,640,506 | MDExOlB1bGxSZXF1ZXN0NTMyOTg4MjMw | 1,152 | hindi discourse analysis dataset commit | [] | closed | false | null | 9 | 2020-12-05T09:24:01Z | 2020-12-14T19:44:48Z | 2020-12-14T19:44:48Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1152/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1152/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1152.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1152",
"merged_at": "2020-12-14T19:44:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1152.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1152"
} | true | [
"That's a great dataset to have! We need a couple more things to be good to go:\r\n- you should `make style` and `flake8 datasets` before pushing to make the code quality check happy :) \r\n- the dataset will need some dummy data which you should be able to auto-generate and test locally: https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#automatically-add-code-metadata\r\n- there's some good information in your current README, but we need the format to follow the template [here](https://github.com/huggingface/datasets/blob/master/templates/README.md) and to have YAML tags at the top, as described in the guide: https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#tag-the-dataset-and-write-the-dataset-card\r\n\r\nLEt us know if you need any help!",
"Hi @yjernite \r\nI was successfully able to generate the dataset_info.json file using the command \r\npython datasets-cli test datasets/<your-dataset-folder> --save_infos --all_configs\r\n\r\nBut unfortunately, could not generate the dummy data\r\n\r\nWhile running the command \r\npython datasets-cli dummy_data datasets/<your-dataset-folder> --auto_generate\r\nI got an error as \r\n\r\nValueError: Couldn't parse columns ['0', '1', '2', '3', '4', ......, '9982']. Maybe specify which json field must be used to read the data with --json_field <my_field>.\r\n\r\nThe thing is the dataset I am trying to upload is of the format \r\n{\r\n '0': {'Story_no': 15, 'Sentence': ' गाँठ से साढ़े तीन रुपये लग गये, जो अब पेट में जाकर खनकते भी नहीं! जो तेरी करनी मालिक! ” “इसमें मालिक की क्या करनी है? ”', 'Discourse Mode': 'Dialogue'},\r\n '1': {'Story_no': story_no, 'Sentence': sentence, 'Discourse Mode': discourse_mode},\r\n .......,\r\n '9982': {'Story_no': story_no, 'Sentence': sentence, 'Discourse Mode': discourse_mode}\r\n}\r\n\r\nCan you please suggest any errors I am making in the _generate_examples method?\r\n\r\nThanks!",
"The dummy data generator doesn't support this kind of json format yet.\r\nCan you create the dummy data manually please ? You can get the instructions by running the \r\n```\r\ndatasets-cli dummy_data ./datasets/dataset_name\r\n```\r\ncommand.",
"Hi, I created the dummy data manually but the tests are still failing it seems.\r\nCan you suggest the format of JSON which is supported by dummy data generator?\r\nI will have to modify my _generate_examples method accordingly.\r\nPlease advice on the same.\r\nThanks much.\r\n",
"Can you run `make style` to format the code for the CI please ?\r\n\r\nAlso about the dummy data, here is how to generate them:\r\n\r\nWe need a dummy_data.zip file in ./datasets/hindiDiscourse/dummy/1.0.0 (or replace hindiDiscourse by hindi_discourse since we have to rename the folder anyway)\r\nTo create the zip file, first go in this directory and create a folder named dummy_data.\r\nThen inside the dummy_data folder create a file `discourse_dataset.json` and fill it with something like 5 examples.\r\nFinally zip the dummy_data folder to end up with the dummy_data.zip file\r\n\r\nOnce it's done you can check if the dummy data test passes with \r\n```\r\nRUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_hindi_discourse\r\n```\r\n\r\nIf it passes you can then remove the dummy_data folder to keep only the dummy_data.zip file",
"Hi @duttahritwik did you manage to make the dummy data ?\r\nFeel free to ping me if you have questions or if we can help",
"The error `tests/test_file_utils.py::TempSeedTest::test_tensorflow` just appeared because of tensorflow's update.\r\nOnce it's fixed on master we'll be free to merge this one",
"Ci is green on master :) ",
"merging since the CI is fixed on master"
] |
|
https://api.github.com/repos/huggingface/datasets/issues/1482 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1482/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1482/comments | https://api.github.com/repos/huggingface/datasets/issues/1482/events | https://github.com/huggingface/datasets/pull/1482 | 762,686,820 | MDExOlB1bGxSZXF1ZXN0NTM3MjA4NDk3 | 1,482 | Adding medical database chinese and english | [] | closed | false | null | 5 | 2020-12-11T17:50:39Z | 2021-02-16T05:28:36Z | 2020-12-15T18:23:53Z | null | Error in creating dummy dataset | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1482/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1482/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1482.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1482",
"merged_at": "2020-12-15T18:23:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1482.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1482"
} | true | [
"Let me know it that helps !\r\nAlso feel free to ping me if you have other questions or if I can help you.",
"Now I am getting an Assertion Error!\r\n\r\n",
"All tests have passed. However, PyTest is still failing with the `AssertionError` as before. Also _datasets_info.json_ actually does not seem to provide much info. Please review and let me know what has to be improved.\r\n\r\nThanks!",
"[PR-1503](https://github.com/huggingface/datasets/pull/1503) on the COVID dialog dataset from the same University has similar features. I kept it separate because it is only on COVID qa. Also consists of only single files per language. Kindly let me know if it has to be added as two more configurations to this existing dataset itself. If it has to be added, then can I still freeze the file names like in PR-1503?",
"It's ok to have them separate"
] |
https://api.github.com/repos/huggingface/datasets/issues/5500 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5500/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5500/comments | https://api.github.com/repos/huggingface/datasets/issues/5500/events | https://github.com/huggingface/datasets/issues/5500 | 1,569,257,240 | I_kwDODunzps5diPcY | 5,500 | WMT19 custom download checksum error | [] | closed | false | null | 1 | 2023-02-03T05:45:37Z | 2023-02-03T05:52:56Z | 2023-02-03T05:52:56Z | null | ### Describe the bug
I use the following scripts to download data from WMT19:
```python
import datasets
from datasets import inspect_dataset, load_dataset_builder
from wmt19.wmt_utils import _TRAIN_SUBSETS,_DEV_SUBSETS
## this is a must due to: https://discuss.huggingface.co/t/load-dataset-hangs-with-local-files/28034/3
if __name__ == '__main__':
    dev_subsets, train_subsets = [], []
    for subset in _TRAIN_SUBSETS:
        if subset.target == 'en' and 'de' in subset.sources:
            train_subsets.append(subset.name)
    for subset in _DEV_SUBSETS:
        if subset.target == 'en' and 'de' in subset.sources:
            dev_subsets.append(subset.name)
    inspect_dataset("wmt19", "./wmt19")
    builder = load_dataset_builder(
        "./wmt19/wmt_utils.py",
        language_pair=("de", "en"),
        subsets={
            datasets.Split.TRAIN: train_subsets,
            datasets.Split.VALIDATION: dev_subsets,
        },
    )
    builder.download_and_prepare()
    ds = builder.as_dataset()
    ds.to_json("../data/wmt19/ende/data.json")
```
And I got the following error:
```
Traceback (most recent call last):
  File "draft.py", line 26, in <module>
    builder.download_and_prepare()
  File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/builder.py", line 605, in download_and_prepare
    self._download_and_prepare(
  File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/builder.py", line 1104, in _download_and_prepare
    super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
  File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/builder.py", line 676, in _download_and_prepare
    verify_checksums(
  File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 35, in verify_checksums
    raise UnexpectedDownloadedFile(str(set(recorded_checksums) - set(expected_checksums)))
datasets.utils.info_utils.UnexpectedDownloadedFile: {'https://s3.amazonaws.com/web-language-models/paracrawl/release1/paracrawl-release1.en-de.zipporah0-dedup-clean.tgz', 'https://huggingface.co/datasets/wmt/wmt13/resolve/main-zip/training-parallel-europarl-v7.zip', 'https://huggingface.co/datasets/wmt/wmt18/resolve/main-zip/translation-task/rapid2016.zip', 'https://huggingface.co/datasets/wmt/wmt18/resolve/main-zip/translation-task/training-parallel-nc-v13.zip', 'https://huggingface.co/datasets/wmt/wmt17/resolve/main-zip/translation-task/training-parallel-nc-v12.zip', 'https://huggingface.co/datasets/wmt/wmt14/resolve/main-zip/training-parallel-nc-v9.zip', 'https://huggingface.co/datasets/wmt/wmt15/resolve/main-zip/training-parallel-nc-v10.zip', 'https://huggingface.co/datasets/wmt/wmt16/resolve/main-zip/translation-task/training-parallel-nc-v11.zip'}
```
### Steps to reproduce the bug
see above
### Expected behavior
download data successfully
### Environment info
datasets==2.1.0
python==3.8
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5500/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5500/timeline | null | completed | null | null | false | [
"I update the `datatsets` version and it works."
] |
https://api.github.com/repos/huggingface/datasets/issues/4409 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4409/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4409/comments | https://api.github.com/repos/huggingface/datasets/issues/4409/events | https://github.com/huggingface/datasets/pull/4409 | 1,249,083,179 | PR_kwDODunzps44fxiH | 4,409 | Update: add using pcm bytes (#4323) | [] | closed | false | null | 16 | 2022-05-26T04:26:36Z | 2022-07-07T13:27:29Z | 2022-07-07T13:16:09Z | null | first of all, please look #4323
Why I can't simply use {"path", "array", "sampling_rate"}:
because sf.write(format="wav") followed by sf.read(BytesIO) changes my PCM data values.
I think this is because a WAV file has a header while raw PCM does not.
Also, about the variable naming: the PCM data has type "bytes", so the name "array" doesn't really fit here.
So I use the scipy and numpy libraries (both already huggingface dependencies),
and, following what @lhoestq answered:
1. encode -> use the sampling_rate and the PCM bytes to build WAV-style bytes (scipy.io.wavfile.write to a byte buffer)
2. convert the bytes the same way fairseq reads raw PCM audio, see [FileAudioDataset](https://github.com/facebookresearch/fairseq/blob/main/fairseq/data/audio/raw_audio_dataset.py)
3. decode -> read it back with wavfile.read
This way my PCM bytes are not corrupted when converted to float data, and other audio types (e.g. WAV) stay safe. A minimal sketch of the round trip is shown below.
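For illustration, here is a rough sketch of that encode/decode round trip (the helper names and the 16-bit little-endian mono PCM assumption are mine, not the exact code of this PR):

```python
import io

import numpy as np
from scipy.io import wavfile


def pcm_bytes_to_wav_bytes(pcm_bytes: bytes, sampling_rate: int) -> bytes:
    # assumes 16-bit little-endian mono PCM
    array = np.frombuffer(pcm_bytes, dtype="<i2")
    buffer = io.BytesIO()
    wavfile.write(buffer, sampling_rate, array)  # WAV-style bytes with a proper header
    return buffer.getvalue()


def wav_bytes_to_float_array(wav_bytes: bytes):
    sampling_rate, array = wavfile.read(io.BytesIO(wav_bytes))
    # fairseq-style scaling of int16 PCM to float32 in [-1, 1]
    return array.astype(np.float32) / 32768.0, sampling_rate
```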
please check! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4409/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4409/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4409.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4409",
"merged_at": "2022-07-07T13:16:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4409.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4409"
} | true | [
"@lhoestq Maybe I'm missing something, but what's the reason to read and encode PCM files to WAV in `Audio.encode_example`. Isn't the whole purpose of the decodable types to operate on raw files whenever possible? IMO this PR should only modify `Audio.decode_example` to support PCM files/bytes decoding.",
"Because the PCM file is not enough, we also need the `sampling_rate` associated to it. Therefore the two alternatives are either:\r\n- convert to WAV\r\n- add a `sampling_rate` field to the Audio arrow storage (not sure how it would behave for backward compatibility though)",
"But [`scipy.io.wavfile.read`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.io.wavfile.read.html), which is used for reading such files, returns a file's sampling rate. The only tricky part is [resampling](https://stackoverflow.com/questions/33682490/how-to-read-a-wav-file-using-scipy-at-a-different-sampling-rate) to a different sampling rate than the default one.",
"How does it get the sampling rate of a PCM file then ? According to [SO](https://stackoverflow.com/a/57027667/17517845) it's not possible to infer it from the file alone",
"> Awesome thanks ! Could you also add tests in `tests/features/test_audio.py` ?\r\n> \r\n> Maybe add a small pcm file in `tests/features/data` and check that everything works as expected in tests cases like `test_audio_encode_example_pcm` and `test_audio_decode_example_pcm` for example.\r\n\r\n@lhoestq how can i test test_audio.py? where is \"__main__\" func?\r\ndo you have some example or guideline?",
"> But [`scipy.io.wavfile.read`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.io.wavfile.read.html), which is used for reading such files, returns a file's sampling rate. The only tricky part is [resampling](https://stackoverflow.com/questions/33682490/how-to-read-a-wav-file-using-scipy-at-a-different-sampling-rate) to a different sampling rate than the default one.\r\n\r\n@mariosasko @lhoestq \r\nthanks for comment!\r\n\r\nFirst of all, \"PCM file\" can not read alone to any audio library.\r\n\"PCM file\" has not any audio META information header. (it just purely audio byte data. therefore, we don't have to encoding and decoding)\r\nbut, \"PCM file\" is audio extension, so we can use `datasets.Audio`\r\n\r\nif you want to read \"PCM file\" to audio file likely, it have to needs additional parameter. (channel, sampling_rate, else....)\r\nbut, in many situation, we only know sampling_rate for PCM\r\n\r\nand, if we want to use `datasets.Audio` for \"PCM file\", we must process encode_example.\r\ntherefore, i have to use sampling_rate for encoding for making wav-style byte. (we only know sampling_rate)\r\n\r\nIn my source code, I don't compare sampling rate(`datasets.Audio's self.sampling_rate` and `read pcm sampling_rate(value[\"sampling_rate\"])`) and checking mono\r\n@mariosasko ! do you want to process resampling and making mono? then i can modify my source\r\n",
"There is no \"main\" function in test scripts :) To run a test script you must use the `pytest` command:\r\n```\r\npytest tests/features/test_audio.py\r\n```\r\n\r\nto run only one function you can also do\r\n```\r\npytest tests/features/test_audio.py::test_audio_feature_type_to_arrow\r\n```\r\nfor example",
"@lhoestq\r\nmaybe, if i write test code, i have to commit test_audio.py and send pr?\r\nbecause, we need to keep `test_audio_encode_example_pcm` and `test_audio_decode_example_pcm` method after my pr merged?",
"You can add your tests in this PR with the other changes you did",
"@lhoestq \r\ntest complete & commit my test_audio.py\r\n\r\nAND, some change in my code.\r\n\r\naudio.py\r\ni think \"sampling_rate\" is already Audio object initial variable. so, we don`t have to use input parameter.\r\n\r\ntest_audio.py\r\nwe can check \"PCM\" file to path (exactly, extenstion)\r\nso, test case has to know `path`. if only have `bytes`, we don`t know that is \"PCM\" or not",
"@lhoestq\r\nand, why circleci raised exception?\r\nmaybe, [repo](https://huggingface.co/api/datasets/lhoestq/_dummy?full=true) url is not found!\r\nPLZ, CHK!",
"@lhoestq\r\nhello????",
"@lhoestq \r\ntest_audio.py\r\nif we don`t use path in pcm, test-case need to be changed\r\nso, we check path just None",
"i'm merge branch already and `multiprocess` in `setup.py` but circleci error only win version\r\n\r\nhow can i fixed it?",
"@lhoestq thx for comment!\r\ntest_audio.py test complete. it runs sucessfully\r\nand, self.get(\"sampling_rate\") -> value.get(\"sampling_rate\") changed\r\n\r\nand, some comment is not agreed to me, plz check my sub comment!",
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/2961 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2961/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2961/comments | https://api.github.com/repos/huggingface/datasets/issues/2961/events | https://github.com/huggingface/datasets/pull/2961 | 1,006,453,781 | PR_kwDODunzps4sPTXV | 2,961 | Fix CI doc build | [] | closed | false | null | 0 | 2021-09-24T13:13:28Z | 2021-09-24T13:18:07Z | 2021-09-24T13:18:07Z | null | Pin `fsspec`.
Versions before the issue: 'fsspec-2021.8.1', 's3fs-2021.8.1'
Versions generating the issue: 'fsspec-2021.9.0', 's3fs-0.5.1'
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2961/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2961/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2961.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2961",
"merged_at": "2021-09-24T13:18:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2961.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2961"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2342 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2342/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2342/comments | https://api.github.com/repos/huggingface/datasets/issues/2342/events | https://github.com/huggingface/datasets/pull/2342 | 882,981,420 | MDExOlB1bGxSZXF1ZXN0NjM2NDg0MzM3 | 2,342 | Docs - CER above 1 | [] | closed | false | null | 0 | 2021-05-09T23:41:00Z | 2021-05-10T13:34:00Z | 2021-05-10T13:34:00Z | null | CER can actually be greater than 1. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2342/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2342/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2342.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2342",
"merged_at": "2021-05-10T13:34:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2342.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2342"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/608 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/608/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/608/comments | https://api.github.com/repos/huggingface/datasets/issues/608/events | https://github.com/huggingface/datasets/issues/608 | 698,291,156 | MDU6SXNzdWU2OTgyOTExNTY= | 608 | Don't use the old NYU GLUE dataset URLs | [] | closed | false | null | 1 | 2020-09-10T17:47:02Z | 2020-09-16T06:53:18Z | 2020-09-16T06:53:18Z | null | NYU is switching dataset hosting from Google to FB. Initial changes to `datasets` are in https://github.com/jeswan/nlp/commit/b7d4a071d432592ded971e30ef73330529de25ce. What tests do you suggest I run before opening a PR?
See: https://github.com/jiant-dev/jiant/issues/161 and https://github.com/nyu-mll/jiant/pull/1112 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/608/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/608/timeline | null | completed | null | null | false | [
"Feel free to open the PR ;)\r\nThanks for updating the dataset_info.json file !"
] |
https://api.github.com/repos/huggingface/datasets/issues/283 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/283/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/283/comments | https://api.github.com/repos/huggingface/datasets/issues/283/events | https://github.com/huggingface/datasets/issues/283 | 641,270,439 | MDU6SXNzdWU2NDEyNzA0Mzk= | 283 | Consistent formatting of citations | [] | closed | false | null | 0 | 2020-06-18T14:48:45Z | 2020-06-22T17:30:46Z | 2020-06-22T17:30:46Z | null | The citations are all of a different format, some have "```" and have text inside, others are proper bibtex.
Can we make it so that they all are proper citations, i.e. parse by the bibtex spec:
https://bibtexparser.readthedocs.io/en/master/ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/283/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/283/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/3246 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3246/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3246/comments | https://api.github.com/repos/huggingface/datasets/issues/3246/events | https://github.com/huggingface/datasets/pull/3246 | 1,049,662,746 | PR_kwDODunzps4uVvaW | 3,246 | [tiny] fix typo in stream docs | [] | closed | false | null | 0 | 2021-11-10T10:40:02Z | 2021-11-10T11:10:39Z | 2021-11-10T11:10:39Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3246/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3246/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3246.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3246",
"merged_at": "2021-11-10T11:10:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3246.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3246"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3170 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3170/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3170/comments | https://api.github.com/repos/huggingface/datasets/issues/3170/events | https://github.com/huggingface/datasets/pull/3170 | 1,037,601,926 | PR_kwDODunzps4twDUo | 3,170 | Preserve ordering in `zip_dict` | [] | closed | false | null | 0 | 2021-10-27T16:07:30Z | 2021-10-29T13:09:37Z | 2021-10-29T13:09:37Z | null | Replace `set` with the `unique_values` generator in `zip_dict`.
This PR fixes the problem with the different ordering of the example keys across different Python sessions caused by the `zip_dict` call in `Features.decode_example`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3170/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3170/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3170.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3170",
"merged_at": "2021-10-29T13:09:37Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3170.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3170"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3462 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3462/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3462/comments | https://api.github.com/repos/huggingface/datasets/issues/3462/events | https://github.com/huggingface/datasets/issues/3462 | 1,085,049,661 | I_kwDODunzps5ArIs9 | 3,462 | Update swahili_news dataset | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | 0 | 2021-12-20T17:44:01Z | 2021-12-21T06:24:02Z | 2021-12-21T06:24:01Z | null | Please note also: the HuggingFace version at https://huggingface.co/datasets/swahili_news is outdated. An updated version, with deduplicated text and official splits, can be found at https://zenodo.org/record/5514203.
## Adding a Dataset
- **Name:** swahili_news
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
Related to:
- bigscience-workshop/data_tooling#107
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3462/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3462/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/4316 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4316/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4316/comments | https://api.github.com/repos/huggingface/datasets/issues/4316/events | https://github.com/huggingface/datasets/pull/4316 | 1,232,681,207 | PR_kwDODunzps43p1Za | 4,316 | Support passing config_kwargs to CLI run_beam | [] | closed | false | null | 1 | 2022-05-11T13:53:37Z | 2022-05-11T14:36:49Z | 2022-05-11T14:28:31Z | null | This PR supports passing `config_kwargs` to CLI run_beam, so that for example for "wikipedia" dataset, we can pass:
```
--date 20220501 --language ca
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4316/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4316/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4316.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4316",
"merged_at": "2022-05-11T14:28:31Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4316.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4316"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/6074 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6074/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6074/comments | https://api.github.com/repos/huggingface/datasets/issues/6074/events | https://github.com/huggingface/datasets/pull/6074 | 1,822,299,128 | PR_kwDODunzps5Wb8O_ | 6,074 | Misc doc improvements | [] | closed | false | null | 3 | 2023-07-26T12:20:54Z | 2023-07-27T16:16:28Z | 2023-07-27T16:16:02Z | null | Removes the warning about requiring to write a dataset loading script to define multiple configurations, as the README YAML can be used instead (for simple cases). Also, deletes the section about using the `BatchSampler` in `torch<=1.12.1` to speed up loading, as `torch 1.12.1` is over a year old (and `torch 2.0` has been out for a while). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6074/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6074/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6074.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6074",
"merged_at": "2023-07-27T16:16:02Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6074.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6074"
} | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006616 / 0.011353 (-0.004737) | 0.003915 / 0.011008 (-0.007093) | 0.083271 / 0.038508 (0.044763) | 0.072595 / 0.023109 (0.049485) | 0.307224 / 0.275898 (0.031326) | 0.337244 / 0.323480 (0.013764) | 0.005296 / 0.007986 (-0.002690) | 0.003325 / 0.004328 (-0.001003) | 0.064589 / 0.004250 (0.060339) | 0.056369 / 0.037052 (0.019316) | 0.310829 / 0.258489 (0.052340) | 0.345563 / 0.293841 (0.051722) | 0.030551 / 0.128546 (-0.097995) | 0.008519 / 0.075646 (-0.067127) | 0.286368 / 0.419271 (-0.132903) | 0.052498 / 0.043533 (0.008966) | 0.308735 / 0.255139 (0.053596) | 0.329234 / 0.283200 (0.046034) | 0.022588 / 0.141683 (-0.119095) | 1.453135 / 1.452155 (0.000981) | 1.525956 / 1.492716 (0.033239) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.199417 / 0.018006 (0.181410) | 0.454621 / 0.000490 (0.454131) | 0.004928 / 0.000200 (0.004728) | 0.000079 / 0.000054 (0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028436 / 0.037411 (-0.008975) | 0.083722 / 0.014526 (0.069196) | 0.095162 / 0.176557 (-0.081395) | 0.153434 / 0.737135 (-0.583702) | 0.099480 / 0.296338 (-0.196859) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.384647 / 0.215209 (0.169438) | 3.838406 / 2.077655 (1.760751) | 1.891267 / 1.504120 (0.387148) | 1.751432 / 1.541195 (0.210238) | 1.737443 / 1.468490 
(0.268953) | 0.487758 / 4.584777 (-4.097019) | 3.635925 / 3.745712 (-0.109787) | 5.208718 / 5.269862 (-0.061144) | 3.029374 / 4.565676 (-1.536302) | 0.057613 / 0.424275 (-0.366662) | 0.007177 / 0.007607 (-0.000430) | 0.455596 / 0.226044 (0.229552) | 4.559969 / 2.268929 (2.291040) | 2.325321 / 55.444624 (-53.119303) | 2.034924 / 6.876477 (-4.841552) | 2.163869 / 2.142072 (0.021796) | 0.583477 / 4.805227 (-4.221750) | 0.132870 / 6.500664 (-6.367795) | 0.059618 / 0.075469 (-0.015851) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.263751 / 1.841788 (-0.578037) | 19.740004 / 8.074308 (11.665696) | 14.410980 / 10.191392 (4.219588) | 0.170367 / 0.680424 (-0.510057) | 0.018225 / 0.534201 (-0.515976) | 0.390101 / 0.579283 (-0.189182) | 0.404298 / 0.434364 (-0.030066) | 0.455295 / 0.540337 (-0.085043) | 0.621179 / 1.386936 (-0.765757) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006580 / 0.011353 (-0.004773) | 0.004078 / 0.011008 (-0.006930) | 0.065842 / 0.038508 (0.027334) | 0.074494 / 0.023109 (0.051385) | 0.403644 / 0.275898 (0.127746) | 0.430204 / 0.323480 (0.106724) | 0.005343 / 0.007986 (-0.002643) | 0.003366 / 0.004328 (-0.000963) | 0.064858 / 0.004250 (0.060607) | 0.056252 / 0.037052 (0.019200) | 0.412556 / 0.258489 (0.154067) | 0.434099 / 0.293841 (0.140258) | 0.031518 / 0.128546 (-0.097028) | 0.008543 / 0.075646 (-0.067104) | 0.071658 / 0.419271 (-0.347613) | 0.049962 / 0.043533 (0.006430) | 0.398511 / 0.255139 (0.143372) | 0.415908 / 0.283200 (0.132708) | 0.025011 / 0.141683 (-0.116672) | 1.492350 / 1.452155 (0.040195) | 1.552996 / 1.492716 (0.060280) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.204971 / 0.018006 (0.186964) | 0.439965 / 0.000490 (0.439475) | 0.002071 / 0.000200 (0.001872) | 0.000084 / 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031673 / 0.037411 (-0.005738) | 0.087529 / 0.014526 (0.073004) | 0.099882 / 0.176557 (-0.076675) | 0.156994 / 0.737135 (-0.580141) | 0.101421 / 0.296338 (-0.194918) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.407480 / 0.215209 (0.192271) | 4.069123 / 2.077655 (1.991468) | 2.081288 / 1.504120 (0.577169) | 1.920367 / 1.541195 (0.379172) | 1.981053 / 1.468490 (0.512563) | 0.481995 / 4.584777 (-4.102782) | 3.546486 / 3.745712 (-0.199226) | 5.133150 / 5.269862 (-0.136712) | 3.056444 / 4.565676 (-1.509232) | 0.056650 / 0.424275 (-0.367625) | 0.007746 / 0.007607 (0.000139) | 0.490891 / 0.226044 (0.264847) | 4.902160 / 2.268929 (2.633232) | 2.564726 / 55.444624 (-52.879899) | 2.234988 / 6.876477 (-4.641489) | 2.387656 / 2.142072 (0.245583) | 0.576315 / 4.805227 (-4.228912) | 0.132065 / 6.500664 (-6.368599) | 0.060728 / 0.075469 (-0.014741) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.370568 / 1.841788 (-0.471220) | 19.883159 / 8.074308 (11.808851) | 14.442066 / 10.191392 (4.250674) | 0.150119 / 0.680424 (-0.530305) | 0.018359 / 0.534201 (-0.515842) | 0.394128 / 0.579283 (-0.185155) | 0.411697 / 0.434364 (-0.022667) | 0.460580 / 0.540337 (-0.079757) | 0.608490 / 1.386936 (-0.778446) |\n\n</details>\n</details>\n\n\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"merging now if you don't mind - this way I can make a patch release"
] |
https://api.github.com/repos/huggingface/datasets/issues/732 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/732/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/732/comments | https://api.github.com/repos/huggingface/datasets/issues/732/events | https://github.com/huggingface/datasets/pull/732 | 721,359,448 | MDExOlB1bGxSZXF1ZXN0NTAzMjkwMjEy | 732 | dataset(wlasl): initial loading script | [] | closed | false | null | 2 | 2020-10-14T11:01:42Z | 2021-03-23T06:19:43Z | 2021-03-23T06:19:43Z | null | takes like 9-10 hours to download all of the videos for the dataset, but it does finish :) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/732/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/732/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/732.diff",
"html_url": "https://github.com/huggingface/datasets/pull/732",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/732.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/732"
} | true | [
"Followup: \r\nFrom the info in https://github.com/huggingface/datasets/pull/722, I probably should load the videos as array of frames directly into the database. \r\nThis will make the dataset generation time very long, but will make working with the dataset much easier.",
"When I run:\r\n```\r\npython datasets-cli dummy_data datasets/wlasl\r\n```\r\n\r\nI get:\r\n```\r\nChecking datasets/wlasl/wlasl.py for additional imports. \r\nFound main folder for dataset datasets/wlasl/wlasl.py at /home/nlp/amit/.cache/huggingface/modules/datasets_modules/datasets/wlasl \r\nFound specific version folder for dataset datasets/wlasl/wlasl.py at /home/nlp/amit/.cache/huggingface/modules/datasets_modules/datasets/wlasl/f0cad785350d770804f20c471b0cff2d7e3c7b210b5c3a228c393abb95a04786 \r\nFound script file from datasets/wlasl/wlasl.py to /home/nlp/amit/.cache/huggingface/modules/datasets_modules/datasets/wlasl/f0cad785350d770804f20c471b0cff2d7e3c7b210b5c3a228c393abb95a04786/wlasl.py \r\nFound dataset infos file from datasets/wlasl/dataset_infos.json to /home/nlp/amit/.cache/huggingface/modules/datasets_modules/datasets/wlasl/f0cad785350d770804f20c471b0cff2d7e3c7b210b5c3a228c393abb95a04786/dataset_infos.json \r\nFound metadata file for dataset datasets/wlasl/wlasl.py at /home/nlp/amit/.cache/huggingface/modules/datasets_modules/datasets/wlasl/f0cad785350d770804f20c471b0cff2d7e3c7b210b5c3a228c393abb95a04786/wlasl.json \r\nUsing custom data configuration default \r\nLoading Dataset Infos from /home/nlp/amit/.cache/huggingface/modules/datasets_modules/datasets/wlasl/f0cad785350d770804f20c471b0cff2d7e3c7b210b5c3a228c393abb95a04786\r\nCreating dummy folder structure for datasets/wlasl/dummy/0.3.0... \r\nDataset datasets with config None seems to already open files in the method `_split_generators(...)`. You might consider to instead only open files in the method `_generate_examples(...)` instead. If this is not possible the dummy data has to be created with less guidance. Make sure you create the file dummy_data. \r\nTraceback (most recent call last): \r\nFile \"datasets-cli\", line 36, in \r\nservice.run() File \"/home/nlp/amit/anaconda2/envs/meta-scholar/lib/python3.7/site-packages/datasets-1.1.2-py3.7.egg/datasets/commands/dummy_data.py\", line 73, in run \r\nfor split in generator_splits: \r\nUnboundLocalError: local variable 'generator_splits' referenced before assignment\r\n```"
] |
https://api.github.com/repos/huggingface/datasets/issues/3026 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3026/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3026/comments | https://api.github.com/repos/huggingface/datasets/issues/3026/events | https://github.com/huggingface/datasets/pull/3026 | 1,016,067,794 | PR_kwDODunzps4srtyc | 3,026 | added arxiv paper inswiss_judgment_prediction dataset card | [] | closed | false | null | 0 | 2021-10-05T09:02:01Z | 2021-10-08T16:01:44Z | 2021-10-08T16:01:24Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3026/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3026/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3026.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3026",
"merged_at": "2021-10-08T16:01:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3026.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3026"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3384 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3384/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3384/comments | https://api.github.com/repos/huggingface/datasets/issues/3384/events | https://github.com/huggingface/datasets/pull/3384 | 1,071,594,165 | PR_kwDODunzps4vaNwL | 3,384 | Adding mMARCO dataset | [] | closed | false | null | 0 | 2021-12-05T23:59:11Z | 2021-12-12T15:27:36Z | 2021-12-12T15:27:36Z | null | We are adding mMARCO dataset to HuggingFace datasets repo.
This way, all the languages covered in the translation are available in an easy way. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3384/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3384/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3384.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3384",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3384.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3384"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3044 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3044/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3044/comments | https://api.github.com/repos/huggingface/datasets/issues/3044/events | https://github.com/huggingface/datasets/issues/3044 | 1,020,869,778 | I_kwDODunzps482TyS | 3,044 | Inconsistent caching behaviour when using `Dataset.map()` with a `new_fingerprint` and `num_proc>1` | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | 3 | 2021-10-08T09:07:10Z | 2022-09-07T21:01:36Z | null | null | ## Describe the bug
Caching does not work when using `Dataset.map()` with:
1. a function that cannot be deterministically fingerprinted
2. `num_proc>1`
3. using a custom fingerprint set with the argument `new_fingerprint`.
This means that the dataset will be mapped with the function for each and every call, which does not happen if `num_proc==1`. In that case (`num_proc==1`) subsequent calls will load the transformed dataset from the cache, which is the expected behaviour. The example can easily be translated into a unit test.
I have a fix and will submit a pull request asap.
## Steps to reproduce the bug
```python
import hashlib
import json
import os
from typing import Dict, Any

import numpy as np
from datasets import load_dataset, Dataset

Batch = Dict[str, Any]
filename = 'example.json'


class Transformation():
    """A transformation with a random state that cannot be fingerprinted"""

    def __init__(self):
        self.state = np.random.random()

    def __call__(self, batch: Batch) -> Batch:
        batch['x'] = [np.random.random() for _ in batch['x']]
        return batch


def generate_dataset():
    """generate a simple dataset"""
    rgn = np.random.RandomState(24)
    data = {
        'data': [{'x': float(y), 'y': -float(y)} for y in
                 rgn.random(size=(1000,))]}
    if not os.path.exists(filename):
        with open(filename, 'w') as f:
            f.write(json.dumps(data))
    return filename


def process_dataset_with_cache(num_proc=1, remove_cache=False,
                               cache_expected_to_exist=False):
    # load the generated dataset
    dset: Dataset = next(
        iter(load_dataset('json', data_files=filename, field='data').values()))
    new_fingerprint = hashlib.md5("static-id".encode("utf8")).hexdigest()

    # get the expected cached path
    cache_path = dset._get_cache_file_path(new_fingerprint)
    if remove_cache and os.path.exists(cache_path):
        os.remove(cache_path)

    # check that the cache exists, and print a statement
    # if was actually expected to exist
    cache_exist = os.path.exists(cache_path)
    print(f"> cache file exists={cache_exist}")
    if cache_expected_to_exist and not cache_exist:
        print("=== Cache does not exist! ====")

    # apply the transformation with the new fingerprint
    dset = dset.map(
        Transformation(),
        batched=True,
        num_proc=num_proc,
        new_fingerprint=new_fingerprint,
        desc="mapping dataset with transformation")


generate_dataset()
for num_proc in [1, 2]:
    print(f"# num_proc={num_proc}, first pass")
    # first pass to generate the cache (always create a new cache here)
    process_dataset_with_cache(remove_cache=True,
                               num_proc=num_proc,
                               cache_expected_to_exist=False)

    print(f"# num_proc={num_proc}, second pass")
    # second pass, expects the cache to exist
    process_dataset_with_cache(remove_cache=False,
                               num_proc=num_proc,
                               cache_expected_to_exist=True)

os.remove(filename)
```
## Expected results
In the above python example, with `num_proc=2`, the **cache file should exist in the second call** of `process_dataset_with_cache` ("=== Cache does not exist! ====" should not be printed).
When the cache is successfully created, `map()` is called only one time.
## Actual results
In the above python example, with `num_proc=2`, the **cache does not exist in the second call** of `process_dataset_with_cache` (this results in printing "=== Cache does not exist! ====").
Because the cache doesn't exist, the `map()` method is executed a second time and the dataset is not loaded from the cache.
## Environment info
- `datasets` version: 1.12.1
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.8
- PyArrow version: 5.0.0
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3044/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3044/timeline | null | null | null | null | false | [
"Following the discussion in #3045 if would be nice to have a way to let users have a nice experience with caching even if the function is not hashable.\r\n\r\nCurrently a workaround is to make the function picklable. This can be done by implementing a callable class instead, that can be pickled using by implementing a custom `__getstate__` method for example.\r\n\r\nHowever it sounds pretty complicated for a simple thing. Maybe one idea would be to have something similar to streamlit: they allow users to register the hashing of their own objects.\r\n\r\nSee the documentation about their `hash_funcs` here: https://docs.streamlit.io/library/advanced-features/caching#the-hash_funcs-parameter\r\n\r\nHere is the example they give:\r\n\r\n```python\r\nclass FileReference:\r\n def __init__(self, filename):\r\n self.filename = filename\r\n\r\ndef hash_file_reference(file_reference):\r\n filename = file_reference.filename\r\n return (filename, os.path.getmtime(filename))\r\n\r\[email protected](hash_funcs={FileReference: hash_file_reference})\r\ndef func(file_reference):\r\n ...\r\n```",
"My solution was to generate a custom hash, and use the hash as a `new_fingerprint` argument to the `map()` method to enable caching. This works, but is quite hacky.\r\n\r\n@lhoestq, this approach is very neat, this would make the whole caching mechanic more explicit. I don't have so much time to look into this right now, but I might give it a try in the future. ",
"Almost a year later and I'm in a similar boat. Using custom fingerprints and when using multiprocessing the cached datasets are saved with a template at the end of the filename (something like \"000001_of_000008\" for every process of num_proc). So if in the next time you run the script you set num_proc to a different number, the cache cannot be used.\r\n\r\nIs there any way to get around this? I am processing a huge dataset so I do the processing on one machine and then transfer the processed data to another in its cache dir but currently that's not possible due to num_proc mismatch. "
] |
https://api.github.com/repos/huggingface/datasets/issues/764 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/764/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/764/comments | https://api.github.com/repos/huggingface/datasets/issues/764/events | https://github.com/huggingface/datasets/pull/764 | 730,617,828 | MDExOlB1bGxSZXF1ZXN0NTEwODkyMTk2 | 764 | Adding Issue Template for Dataset Requests | [] | closed | false | null | 0 | 2020-10-27T16:37:08Z | 2020-10-27T17:25:26Z | 2020-10-27T17:25:25Z | null | adding .github/ISSUE_TEMPLATE/add-dataset.md | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/764/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/764/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/764.diff",
"html_url": "https://github.com/huggingface/datasets/pull/764",
"merged_at": "2020-10-27T17:25:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/764.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/764"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/6088 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6088/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6088/comments | https://api.github.com/repos/huggingface/datasets/issues/6088/events | https://github.com/huggingface/datasets/issues/6088 | 1,825,665,235 | I_kwDODunzps5s0XDT | 6,088 | Loading local data files initiates web requests | [] | closed | false | null | 0 | 2023-07-28T04:06:26Z | 2023-07-28T05:02:22Z | 2023-07-28T05:02:22Z | null | As documented in the [official docs](https://huggingface.co/docs/datasets/v2.14.0/en/package_reference/loading_methods#datasets.load_dataset.example-2), I tried to load datasets from local files by
```python
# Load a JSON file
from datasets import load_dataset
ds = load_dataset('json', data_files='path/to/local/my_dataset.json')
```
But this failed on a web request because I'm executing the script on a machine without Internet access. Stacktrace shows
```
in PackagedDatasetModuleFactory.__init__(self, name, data_dir, data_files, download_config, download_mode)
940 self.download_config = download_config
941 self.download_mode = download_mode
--> 942 increase_load_count(name, resource_type="dataset")
```
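For anyone hitting the same wall, here is a minimal workaround sketch. It relies on the documented `HF_DATASETS_OFFLINE` switch; the file path is just the placeholder from the example above.
```python
import os
os.environ["HF_DATASETS_OFFLINE"] = "1"  # must be set before `datasets` is imported

from datasets import load_dataset
ds = load_dataset('json', data_files='path/to/local/my_dataset.json')
```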
I've read from the source code that this can be fixed by setting an environment variable to run in offline mode (sketched above). I'm just wondering: is it expected behaviour that even loading a LOCAL JSON file requires Internet access by default? And what's the point of requesting `increase_load_count` on some server when loading just LOCAL data files? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6088/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6088/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/5739 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5739/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5739/comments | https://api.github.com/repos/huggingface/datasets/issues/5739/events | https://github.com/huggingface/datasets/issues/5739 | 1,663,762,901 | I_kwDODunzps5jKwHV | 5,739 | weird result during dataset split when data path starts with `/data` | [] | open | false | null | 4 | 2023-04-12T04:51:35Z | 2023-04-21T14:20:59Z | null | null | ### Describe the bug
The regex defined here https://github.com/huggingface/datasets/blob/f2607935c4e45c70c44fcb698db0363ca7ba83d4/src/datasets/utils/py_utils.py#L158
will cause a weird result during dataset split resolution when the data path starts with `/data`.
### Steps to reproduce the bug
1. clone dataset into local path
```
cd /data/train/raw/
git lfs clone https://huggingface.co/datasets/deepmind/code_contests.git
ls /data/train/raw/code_contests
# README.md data dataset_infos.json
ls /data/train/raw/code_contests/data
# test-00000-of-00001-9c49eeff30aacaa8.parquet
# train-[0-9]+-of-[0-9]+-xx.parquet
# valid-00000-of-00001-5e672c5751f060d3.parquet
```
2. loading data from local
```
from datasets import load_dataset
dataset = load_dataset('/data/train/raw/code_contests')
FileNotFoundError: Unable to resolve any data file that matches '['data/train/raw/code_contests/data/train-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]*.*']' at /data/train/raw/code_contests with any supported extension
```
weird path `data/train/raw/code_contests/data/train-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]*.*`
While diving deep into `LocalDatasetModuleFactoryWithoutScript` defined in [load.py](https://github.com/huggingface/datasets/blob/f2607935c4e45c70c44fcb698db0363ca7ba83d4/src/datasets/load.py#L627) and `_get_data_files_patterns` https://github.com/huggingface/datasets/blob/f2607935c4e45c70c44fcb698db0363ca7ba83d4/src/datasets/data_files.py#L228, I found that the weird behavior is caused by `string_to_dict`.
3. check `string_to_dict`
```
p = '/data/train/raw/code_contests/data/test-00000-of-00001-9c49eeff30aacaa8.parquet'
split_pattern = 'data/{split}-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]*.*'
string_to_dict(p, split_pattern)
# {'split': 'train/raw/code_contests/data/test'}
p = '/data2/train/raw/code_contests/data/test-00000-of-00001-9c49eeff30aacaa8.parquet'
string_to_dict(p, split_pattern)
{'split': 'test'}
```
go deep into string_to_dict https://github.com/huggingface/datasets/blob/f2607935c4e45c70c44fcb698db0363ca7ba83d4/src/datasets/utils/py_utils.py#L158.
4. test the regex:
<img width="680" alt="image" src="https://user-images.githubusercontent.com/1772912/231351129-75179f01-fb9f-4f12-8fa9-0dfcc3d5f3bd.png">
<img width="679" alt="image" src="https://user-images.githubusercontent.com/1772912/231351025-009f3d83-2cf3-4e15-9ed4-6b9663dcb2ee.png">
### Expected behavior
statement in `steps to reproduce the bug`
3. check `string_to_dict`
```
p = '/data/train/raw/code_contests/data/test-00000-of-00001-9c49eeff30aacaa8.parquet'
split_pattern = 'data/{split}-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]*.*'
string_to_dict(p, split_pattern)
# {'split': 'train/raw/code_contests/data/test'}
p = '/data2/train/raw/code_contests/data/test-00000-of-00001-9c49eeff30aacaa8.parquet'
string_to_dict(p, split_pattern)
{'split': 'test'}
```
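A practical workaround (per the suggestion later in this thread, not a fix of the regex itself) is to run from the dataset's parent directory and load it by name, so the resolved paths stay relative:
```python
import os
from datasets import load_dataset

os.chdir("/data/train/raw")              # work from the dataset's parent directory
dataset = load_dataset("code_contests")  # paths are then resolved relative to here
```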
### Environment info
- linux(debian)
- python 3.7
- datasets 2.8.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5739/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5739/timeline | null | null | null | null | false | [
"Same problem.",
"hi! \r\nI think you can run python from `/data/train/raw/` directory and load dataset as `load_dataset(\"code_contests\")` to mitigate this issue as a workaround. \r\n@ericxsun Do you want to open a PR to fix the regex? As you already found the solution :) ",
"> hi! I think you can run python from `/data/train/raw/` directory and load dataset as `load_dataset(\"code_contests\")` to mitigate this issue as a workaround. @ericxsun Do you want to open a PR to fix the regex? As you already found the solution :)\r\n\r\nSure, please see https://github.com/huggingface/datasets/pull/5748 @polinaeterna ",
"I think `string_to_dict` is ok, and that the issue is that it gets `'/data2/train/raw/code_contests/data/test-00000-of-00001-9c49eeff30aacaa8.parquet'` as input instead of `'data/test-00000-of-00001-9c49eeff30aacaa8.parquet'`. The path should be relative to the directory being loaded by `load_dataset`"
] |
https://api.github.com/repos/huggingface/datasets/issues/5094 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5094/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5094/comments | https://api.github.com/repos/huggingface/datasets/issues/5094/events | https://github.com/huggingface/datasets/issues/5094 | 1,403,214,950 | I_kwDODunzps5To1xm | 5,094 | Multiprocessing with `Dataset.map` and `PyTorch` results in deadlock | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 11 | 2022-10-10T13:50:56Z | 2023-07-24T15:29:13Z | 2023-07-24T15:29:13Z | null | ## Describe the bug
There seems to be an issue with using multiprocessing with `datasets.Dataset.map` (i.e. setting `num_proc` to a value greater than one) combined with a function that uses `torch` under the hood. The subprocesses that `datasets.Dataset.map` spawns [at this step](https://github.com/huggingface/datasets/blob/1b935dab9d2f171a8c6294269421fe967eb55e34/src/datasets/arrow_dataset.py#L2663) go into wait mode forever.
## Steps to reproduce the bug
The below code goes into deadlock when `NUMBER_OF_PROCESSES` is greater than one.
```python
NUMBER_OF_PROCESSES = 2

from transformers import AutoTokenizer, AutoModel
from datasets import load_dataset

dataset = load_dataset("glue", "mrpc", split="train")

tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
model = AutoModel.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
model.to("cpu")


def cls_pooling(model_output):
    return model_output.last_hidden_state[:, 0]


def generate_embeddings_batched(examples):
    sentences_batch = list(examples['sentence1'])
    encoded_input = tokenizer(
        sentences_batch, padding=True, truncation=True, return_tensors="pt"
    )
    encoded_input = {k: v.to("cpu") for k, v in encoded_input.items()}
    model_output = model(**encoded_input)
    embeddings = cls_pooling(model_output)
    examples['embeddings'] = embeddings.detach().cpu().numpy()  # 64, 384
    return examples


embeddings_dataset = dataset.map(
    generate_embeddings_batched,
    batched=True,
    batch_size=10,
    num_proc=NUMBER_OF_PROCESSES
)
```
While debugging it I've seen that it gets "stuck" when calling `torch.nn.Embedding.forward` but some testing shows that the same happens with other functions from `torch.nn`.
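For what it's worth, one commonly suggested mitigation for fork-related deadlocks with PyTorch (a hedged sketch only, not verified against this exact report) is to restrict torch to a single intra-op thread before the multiprocess `map` call:
```python
import torch

torch.set_num_threads(1)  # keep torch's thread pools out of the forked workers

embeddings_dataset = dataset.map(
    generate_embeddings_batched,
    batched=True,
    batch_size=10,
    num_proc=NUMBER_OF_PROCESSES,
)
```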
## Environment info
- Platform: Linux-5.14.0-1052-oem-x86_64-with-glibc2.31
- Python version: 3.9.14
- PyArrow version: 9.0.0
- Pandas version: 1.5.0
Not sure if this is an HF problem, a PyTorch problem, or something I'm doing wrong...
Thanks!
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5094/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5094/timeline | null | completed | null | null | false | [
"Hi ! Could it be an Out of Memory issue that could have killed one of the processes ? can you check your memory ?",
"Hi! I don't think it is a memory issue. I'm monitoring the main and spawn python processes and threads with `htop` and the memory does not peak. Besides, the example I've posted above should not be that demanding in terms of memory, right? (I have 32GB of RAM). ",
"Indeed it should be fine. I couldn't reproduce the error though - I ran your script on my side and it works fine. What version of pytorch are you using ?",
"Interesting.. I'm using `torch 1.12.1`",
"I also tried on colab and it works fine 🤔 \r\nMaybe something is wrong with your installation of pytorch ?",
"Oh actually I just saw that you're using python 3.9\r\n\r\nThis could be related to https://github.com/huggingface/datasets/issues/4113\r\n\r\nWe'll fix that as soon as we can, in the meantime you can try to use use single process, or use an older version of python maybe ?",
"I tried with python 3.7 and the issue persists. In collab, which also uses 3.7 I don't get the issue, so yes I guess is something on mu side... will post it here if I manage to fix it",
"Hi! Which version of transformers are you using? I test the code on Colab (so python 3.7) with transformers 4.23.1, torch 1.12.1 and pyarrow 9.0.0 (also 6.x), it worked without stuck.",
"Hi, I have the same problem in use **datasets.IterableDatasetDict.map()**\r\nmy pytorch is 2.0.0a0+gitc263bd4\r\nmy python is 3.8.16(default, Jun 12 2023, 17:37:21)\r\nwork on aarch64 in 16 node, each node with 4*nVidia-A100-40G\r\nevery node have 4 process execute code as ↓\r\n\r\n```\r\nfrom datasets import load_dataset, interleave_datasets, IterableDatasetDict, concatenate_datasets\r\n```\r\n...\r\n```\r\n model_args.cache_dir = '/home/scx/.cache'\r\n for dataset_name in data_args.datasets_name:\r\n train_datasets.append(\r\n load_dataset(\r\n dataset_name,\r\n cache_dir=model_args.cache_dir,\r\n use_auth_token=True if model_args.use_auth_token else None,\r\n streaming=data_args.streaming,\r\n split='train'\r\n ).select_columns('text')\r\n )\r\n valid_datasets.append(\r\n load_dataset(\r\n dataset_name,\r\n cache_dir=model_args.cache_dir,\r\n use_auth_token=True if model_args.use_auth_token else None,\r\n streaming=data_args.streaming,\r\n split='validation'\r\n ).select_columns('text')\r\n )\r\n train_dataset = interleave_datasets(train_datasets,\r\n probabilities=data_args.datasets_probabilities, \r\n seed=training_args.seed,\r\n stopping_strategy='all_exhausted')\r\n raw_datasets = IterableDatasetDict({'train': train_dataset, 'validation': valid_dataset})\r\n```\r\n...\r\n\r\n```\r\n tokenized_datasets = None\r\n with training_args.main_process_first(desc=\"dataset map tokenization\"):\r\n if not data_args.streaming:\r\n tokenized_datasets = raw_datasets.map(\r\n tokenize_function,\r\n batched=True,\r\n num_proc=data_args.preprocessing_num_workers,\r\n load_from_cache_file=not data_args.overwrite_cache,\r\n desc=\"Running tokenizer on dataset\",\r\n remove_columns=column_names,\r\n )\r\n else:\r\n #TODO 20230722\r\n logger.info('{}: {}'.format(__file__, 'tokenized_datasets = raw_datasets.map('))\r\n logger.info('len raw_datasets: {}'.format(len(raw_datasets.items())))\r\n logger.info('raw_datasets:{}'.format(raw_datasets.items()))\r\n tokenized_datasets = raw_datasets.map(\r\n tokenize_function,\r\n batched=True,\r\n batch_size=1000,\r\n remove_columns=column_names\r\n )\r\n logger.info('map ok!')\r\n logger.info('show train: {}'.format(next(iter(tokenized_datasets['train']))))\r\n logger.info('ok')\r\n # ### RAW CODE ###\r\n # tokenized_datasets = raw_datasets.map(\r\n # tokenize_function,\r\n # batched=True,\r\n # batch_size=1000,\r\n # remove_columns=column_names\r\n # )\r\n #TODO 20230722\r\n logger.info(\"Finish tokenization\")\r\n```\r\nthe output of my code is\r\n```\r\n07/22/2023 21:57:09 - INFO - __main__ - /demo/run_blue_space.py: tokenized_datasets = raw_datasets.map(\r\n07/22/2023 21:57:09 - INFO - __main__ - len raw_datasets: 2\r\n07/22/2023 21:57:09 - INFO - __main__ - raw_datasets:dict_items([('train', <datasets.iterable_dataset.IterableDataset object at 0x4005ee301190>), ('validation', <datasets.iterable_dataset.IterableDataset object at 0x4005ee5427f0>)])\r\n07/22/2023 21:57:09 - INFO - __main__ - map ok!\r\n07/22/2023 22:01:07 - INFO - __main__ - show train: {'input_ids': [14608, 26797, 31891, 34260, 12227, 33207, 5, 5, 31632, 26797, 31891, 34260, 12227, 33207, 7398, 28561, 31236, 31177, 31253, 33558, 31556, 31377, 72, 20732, 32383, 32295, 14027, 31178, 53, 61, 53, 55, 31189, 31146, 31321, 31235, 53, 61, 56, 58, 31189, 31145, 72, 53, 61, 58, 54, 31189, 54, 31245, 53, 60, 31224, 31896, 31178, 28561, 29331, 20732, 31888, 32637, 4426, 2824, 72, 53, 61, 60, 55, 31189, 53, 54, 31245, 53, 31224, 31896, 31178, 28561, 29331, 26137, 20732, 4426, 2824, 73, 54, 52, 52, 52, 
31189, 61, 31245, 59, 31224, 31896, 31178, 29331, 28561, 20732, 4426, 2824, 73, 5], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}\r\n07/22/2023 22:01:07 - INFO - __main__ - ok\r\n```\r\n\r\n",
"@bio-punk `IterableDatasetDict.map` does not support multiprocessing (only `DatasetDict.map` and `Dataset.map` do), so please open a new issue as this doesn't seem to be related to the original issue. ",
"Closing as this issue doesn't seem to be related to `datasets`."
] |